Supercomputers built using Nvidia Corp.’s most advanced platforms are leading the fight against the coronavirus pandemic, helping researchers gain more insights into the nature of the SARS-CoV-2 virus and creating new artificial intelligence models to accelerate drug discovery.
Nvidia revealed today how a team led by Arvind Ramanathan, a computational biologist at the Argonne National Laboratory, designed a new workflow to study the virus that runs across multiple supercomputing systems, including Perlmutter, which is powered by Nvidia A100 graphics processing units. Ramanathan’s researchers have created a way to improve the resolution of traditional tools used to explore DNA, creating fresh insights that may help to arrest the spread of COVID-19.
“The capability to perform multisite data analysis and simulations for integrative biology will be invaluable for making use of large experimental data that are difficult to transfer,” Ramanathan’s researchers said.
The work included the development of a new technique that accelerates molecular dynamics research by running the Nanoscale Molecular Dynamics program, or NAMD, on GPUs. Combined with Nvidia’s NVLink high-speed GPU interconnect, the team was able to process data far beyond what was previously possible with conventional high-performance computing interconnects.
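At its core, molecular dynamics software such as NAMD repeatedly computes the forces between atoms and integrates Newton’s equations of motion. The toy sketch below illustrates that basic loop with a velocity-Verlet integrator over a handful of Lennard-Jones particles; it is an illustrative simplification only, not NAMD’s actual implementation, and the particle positions, units and parameters are made up.

```python
import numpy as np

def pairwise_forces(pos, eps=1.0, sigma=1.0):
    """Lennard-Jones forces between all particle pairs (O(N^2) toy version).

    Real MD codes use neighbor lists and GPU kernels to avoid this cost.
    """
    f = np.zeros_like(pos)
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            r = pos[i] - pos[j]
            d2 = r @ r
            inv6 = (sigma * sigma / d2) ** 3
            # Derivative of U = 4*eps*(s^12/r^12 - s^6/r^6), projected onto r
            mag = 24 * eps * inv6 * (2 * inv6 - 1) / d2
            f[i] += mag * r
            f[j] -= mag * r  # Newton's third law: equal and opposite
    return f

def velocity_verlet(pos, vel, dt=1e-3, steps=100, mass=1.0):
    """The core time-stepping loop every MD code repeats billions of times."""
    f = pairwise_forces(pos)
    for _ in range(steps):
        vel += 0.5 * dt * f / mass   # half kick
        pos += dt * vel              # drift
        f = pairwise_forces(pos)
        vel += 0.5 * dt * f / mass   # half kick
    return pos, vel

# Four particles on a small square, started at rest (arbitrary toy setup)
pos = np.array([[0.0, 0.0], [1.1, 0.0], [0.0, 1.1], [1.1, 1.1]])
vel = np.zeros_like(pos)
pos, vel = velocity_verlet(pos, vel)
```

Because the pairwise forces cancel in equal-and-opposite pairs, total momentum is conserved across the run; production codes parallelize exactly this force loop across GPUs.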
In a separate project, a team led by Ivan Oleynik from the University of South Florida created what they say is the world’s first simulation of a billion atoms using Nvidia-powered machines. The team says the simulation of carbon atoms under extreme temperatures and pressures could open doors to new energy sources and help astronomers to discover the makeup of distant exoplanets. They added that the simulation offers quantum-level accuracy that faithfully reflects the forces among each of the billion atoms.
“It’s accuracy we could only achieve by applying AI techniques,” Oleynik said.
The simulation was created using an InfiniBand-connected system made up of a stunning 27,900 GPUs on the Summit supercomputer built by IBM Corp. The team demonstrated that it’s possible to scale the simulation to 20 billion or more atoms if required.
A second billion-atom simulation, meanwhile, was carried out by a team led by researcher Rommie Amaro at the University of California San Diego. That effort, which also relied on Summit’s computing power, was aimed at improving understanding of COVID-19 by simulating the Delta variant in an airborne droplet. The work provided insights into how the virus binds itself in the deep lung, and could also aid our understanding of the progression of severe diseases such as cancer and cystic fibrosis, the team said.
“We demonstrated how AI coupled to HPC at multiple levels can result in significantly improved effective performance,” the paper said.
One final effort involved applying natural language processing to the problem of screening chemical compounds for new drugs. Jens Glaser, a computational scientist at Oak Ridge National Laboratory, said his team used 24,000 Nvidia GPUs on Summit to train a BERT NLP model for drug discovery on a dataset of 9.6 billion molecules in just two hours. Previously, it took up to four days to train similarly sized models, Nvidia said.
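BERT-style models are typically pretrained with masked-language-modeling, where random tokens are hidden and the model learns to predict them; applied to chemistry, the “sentences” are SMILES strings that encode molecules as text. The sketch below illustrates that data-preparation idea only; it is not the Oak Ridge team’s pipeline, and the simplified tokenizer regex and 15% mask rate are assumptions.

```python
import random
import re

MASK = "[MASK]"

def tokenize_smiles(smiles):
    """Split a SMILES string into atom/bond/ring tokens (simplified regex).

    Handles two-letter atoms like Cl/Br first so they stay as one token.
    """
    pattern = r"Cl|Br|[A-Za-z]|\d|[=#()\[\]@+\-\\/]"
    return re.findall(pattern, smiles)

def mask_tokens(tokens, rate=0.15, rng=None):
    """Hide ~rate of tokens; the model's training target is to recover them."""
    rng = rng or random.Random(0)
    masked, labels = [], []
    for tok in tokens:
        if rng.random() < rate:
            masked.append(MASK)
            labels.append(tok)   # this position is scored during training
        else:
            masked.append(tok)
            labels.append(None)  # this position is ignored by the loss
    return masked, labels

tokens = tokenize_smiles("CC(=O)Oc1ccccc1C(=O)O")  # aspirin
masked, labels = mask_tokens(tokens)
```

At the scale Glaser’s team describes, this preprocessing and the transformer training behind it are what get distributed across thousands of GPUs.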
“We’re just scratching the surface of training data sizes — we hope to use a trillion molecules soon,” said Andrew Blanchard, a research scientist who led the team.
All four projects have been nominated for the Gordon Bell Prize, which is widely considered the equivalent of a Nobel Prize in high-performance computing.
Nvidia’s newest InfiniBand accelerates HPC
In other news revealed today, Nvidia said the academic research community has wasted little time in putting its latest supercomputer networking technology to use. Two universities have announced plans to plug into Nvidia’s new Quantum-2 InfiniBand platform that was unveiled during last week’s GTC 2021 event.
InfiniBand is a computer networking standard used in high-performance computing to move data between processors and input/output devices, known for its high throughput and very low latency.
The latest generation of the technology, Nvidia Quantum-2, introduces new features that will accelerate the most demanding workloads in supercomputers. Quantum-2 doubles network speed while tripling the number of network ports available, meaning overall performance can be accelerated by up to three times compared with existing systems.
Among the first to deploy Quantum-2 InfiniBand will be Texas A&M University. The university said it’s planning to use the 400G InfiniBand network in its ACES supercomputer to connect researchers to a mix of five accelerators from four vendors.
“Besides the obvious two-times jump in throughput from Nvidia Quantum-1 InfiniBand at 200G, it will provide improved total cost of ownership, beefed up in-network computing features and increased scaling,” said Honggao Liu, ACES’s principal investigator and project director.
The other early adopter is Mississippi State University, which is planning to deploy Quantum-2 InfiniBand to expand its Orion supercomputer that runs massive weather forecasting jobs for the U.S. National Oceanic and Atmospheric Administration. The system was ranked as the fourth-largest academic supercomputer in the U.S. when it first went live in June 2019.
“HPC is going everywhere,” said Gilad Shainer, senior vice president of marketing at Nvidia. That, he noted, will require the kind of cloud-native supercomputing Quantum-2 brings. “It’s bringing supercomputing into the cloud,” he said.
Europe gets a new AI research lab
Nvidia also revealed that it has partnered with the French information technology giant Atos SE on an initiative aimed at advancing European computing technologies, education and research. The new Excellence AI Lab, or EXAIL, is building an exascale-class supercomputer that will be powered by Nvidia’s GPUs and Arm-based Grace central processing units, with Quantum-2 InfiniBand networking and Atos’ BXI Exascale Interconnects.
The lab is planning to accelerate research into five key areas where it believes its new supercomputer will be able to make a real impact: climate research, healthcare and genomics, hybridization with quantum computing, edge AI and computer vision, and cybersecurity.
With reporting from Robert Hof