While the rest of the computing industry struggles to reach one exaflop of computing, Nvidia is about to blow past everyone with an 18-exaflop supercomputer powered by a new GPU architecture.

The H100 GPU has 80 billion transistors (the previous generation, Ampere, had 54 billion) with nearly 5TB/s of external connectivity and support for PCIe Gen5, as well as High Bandwidth Memory 3 (HBM3), enabling 3TB/s of memory bandwidth, the company says. It is the first in a new family of GPUs codenamed "Hopper," after Rear Admiral Grace Hopper, the computing pioneer whose work led to COBOL and who popularized the term "computer bug." It is due in the third quarter.

This GPU is meant to power data centers designed to handle heavy AI workloads, and Nvidia claims that 20 of them could sustain the equivalent of the entire world's Internet traffic.

Hopper also comes with the second generation of Nvidia's Secure Multi-Instance GPU (MIG) technology, which allows a single GPU to be partitioned to support security in multi-tenant uses. The key change with the H100 is that MIG instances are now fully isolated with I/O virtualization, and each instance is independently secured with confidential computing capabilities.

Previously, researchers with smaller workloads had to rent a full A100 instance from a cloud service provider to get isolation. With the H100, they can use MIG to securely isolate a portion of a GPU, assured that their data is safe.

"Now this computing power can be securely divided between different users and cloud tenants," said Paresh Kharya, Nvidia's senior director of Data Center Computing, on the pre-briefing call. "That's seven times the MIG capabilities of the previous generation."

New to the H100 is a capability called confidential computing, which protects AI models and customer data while they are being processed. Kharya noted that sensitive data is often encrypted at rest and in transit over the network, but left unprotected during use.
Confidential computing addresses this gap by protecting data in use, he said.

Hopper also has the fourth generation of NVLink, Nvidia's high-speed interconnect technology. Combined with a new external NVLink Switch, the new NVLink can connect up to 256 H100 GPUs at nine times the bandwidth of the previous generation.

Finally, Hopper adds new DPX instructions to accelerate dynamic programming, the practice of breaking problems of combinatorial complexity down into simpler subproblems. It is employed in a wide range of algorithms used in genomics and graph optimization. Hopper's DPX instructions will accelerate dynamic programming by seven times, Kharya said.

Promise of the fastest supercomputer

Pieced together, this technology will be used to create Nvidia DGX H100 systems, 5U rack-mounted units that serve as the building block for powerful DGX SuperPOD supercomputers.

Kharya said the new DGX H100 will offer 32 petaflops of AI performance, six times more than the DGX A100 currently on the market. Combined with the NVLink Switch System, 32 of those nodes form a DGX SuperPOD offering one exaflop of AI performance. It will also offer a bisection bandwidth of 70 terabytes per second, 11 times higher than the DGX A100 SuperPOD.

To show off the H100's capabilities, Nvidia is building a supercomputer called Eos with 18 DGX H100 SuperPODs (4,608 H100 GPUs in all), joined by fourth-generation NVLink and InfiniBand switches, for a total of 18 exaflops of AI performance.
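Those headline figures follow directly from the building blocks Nvidia described. A quick sanity check of the arithmetic (the 8-GPUs-per-node figure is an assumption based on the standard DGX configuration, not stated in the announcement):

```python
# Checking the math behind the announced Eos figures.
PFLOPS_PER_DGX_H100 = 32   # petaflops of AI performance per DGX H100 node
NODES_PER_SUPERPOD = 32    # DGX H100 nodes in one SuperPOD
SUPERPODS_IN_EOS = 18
GPUS_PER_NODE = 8          # assumed: standard DGX systems carry 8 GPUs

superpod_exaflops = PFLOPS_PER_DGX_H100 * NODES_PER_SUPERPOD / 1000
eos_exaflops = superpod_exaflops * SUPERPODS_IN_EOS
total_gpus = GPUS_PER_NODE * NODES_PER_SUPERPOD * SUPERPODS_IN_EOS

print(superpod_exaflops)   # about 1 exaflop per SuperPOD
print(eos_exaflops)        # about 18 exaflops for Eos
print(total_gpus)          # 4608
```

The totals line up: 32 nodes at 32 petaflops each is just over one exaflop per SuperPOD, and 18 SuperPODs of 8-GPU nodes give the 4,608 GPUs and roughly 18 exaflops quoted.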
To put that in perspective: according to the most recent Top500 list of supercomputers, the peak 8-bit performance of the fastest machine, Fugaku, reaches four exaflops; Nvidia is promising more than four times that.

Eos will provide bare-metal performance with multi-tenant isolation, as well as performance isolation to ensure that one application does not impact any other, said Kharya.

"Eos will be used by our AI research teams, as well as by numerous other software engineers and teams who are creating our products, including our autonomous vehicle platform and conversational AI software," he said.

Nvidia did not offer a timeline for when Eos would be deployed. DGX H100 PODs and SuperPODs are expected later this year.
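For readers unfamiliar with the dynamic programming that Hopper's DPX instructions target, a minimal sketch helps. The pattern is to fill a table of subproblem results, where each cell depends only on a few smaller, already-solved subproblems; edit distance between two sequences, the core of many genomics alignment algorithms, is the textbook instance. (This is a generic illustration in Python; the sequences and function are illustrative, not from Nvidia's materials.)

```python
# Minimal dynamic-programming example: Levenshtein edit distance,
# the fill-a-table-of-subproblems pattern used by genomics
# alignment algorithms of the kind DPX instructions accelerate.
def edit_distance(a: str, b: str) -> int:
    # prev[j] holds the edit distance between the processed prefix
    # of `a` and b[:j]; only two rows of the table are kept.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i] + [0] * len(b)
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            # Each cell depends on exactly three smaller subproblems.
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution or match
        prev = curr
    return prev[len(b)]

print(edit_distance("GATTACA", "GCATGCU"))  # 4
```

Because every cell is a cheap min/add over neighbors, the work is dominated by huge numbers of small integer operations over a table, which is exactly what hardware instructions like DPX can speed up.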