The A3 supercomputer's scale can provide up to 26 exaFlops of AI performance, Google says.

Google Cloud announced a new supercomputer virtual-machine series aimed at rapidly training large AI models. Unveiled at the Google I/O conference, the new A3 supercomputer VMs are purpose-built to handle the considerable resource demands of a large language model (LLM).

“A3 GPU VMs were purpose-built to deliver the highest-performance training for today’s ML workloads, complete with modern CPU, improved host memory, next-generation Nvidia GPUs and major network upgrades,” the company said in a statement.

The instances are powered by eight Nvidia H100 GPUs, Nvidia’s newest GPUs, which just began shipping earlier this month, along with Intel’s 4th Generation Xeon Scalable processors, 2TB of host memory, and 3.6 TB/s of bisectional bandwidth between the eight GPUs via Nvidia’s NVSwitch and NVLink 4.0 interconnects.

Altogether, Google claims these machines can provide up to 26 exaFlops of AI performance. That is the cumulative performance of the entire A3 supercomputer, not of each individual instance. Still, it dwarfs Frontier, the fastest supercomputer on the Top500 list, which comes in at just over one exaFlop, although Frontier’s figure measures double-precision performance while Google’s reflects the lower-precision math used for AI training.

According to Google, A3 is the first production-level deployment of its custom GPU-to-GPU data interface, which Google calls the infrastructure processing unit (IPU). It allows data to be shared at 200 Gbps directly between GPUs without going through the host CPU. The result is a ten-fold increase in available network bandwidth for A3 virtual machines compared with the prior-generation A2 VMs.

A3 workloads will run on Google’s specialized Jupiter data-center networking fabric, which the company says “scales to tens of thousands of highly interconnected GPUs and allows for full-bandwidth reconfigurable optical links that can adjust the topology on demand.”

Google will offer A3 in two ways: customers can run it themselves, or they can use a managed service in which Google handles most of the work. If you opt to do it yourself, the A3 VMs run on Google Kubernetes Engine (GKE) and Google Compute Engine (GCE). If you go with the managed service, the VMs run on Vertex AI, the company’s managed machine-learning platform.

The A3 virtual machines are available in preview, which requires filling out an application to join the Early Access Program. Google makes no promises that you will get a spot in the program.
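
For readers curious what the do-it-yourself path might look like once preview access is granted, the following is a minimal sketch of provisioning an A3 instance on Compute Engine with the google-cloud-compute Python client. The machine-type name a3-highgpu-8g, along with the project, zone, and boot image, are illustrative assumptions rather than details from Google's announcement.

```python
# Hypothetical sketch: creating an A3 VM on Google Compute Engine with the
# google-cloud-compute client. The machine type "a3-highgpu-8g" and the
# project/zone/image values are assumptions for illustration, not confirmed
# details from Google's announcement.
from google.cloud import compute_v1

PROJECT_ID = "my-project"   # assumed project with A3 preview access
ZONE = "us-central1-a"      # assumed zone offering A3 capacity


def create_a3_instance(name: str) -> None:
    # Boot disk built from a stock Debian image family (illustrative choice).
    boot_disk = compute_v1.AttachedDisk(
        boot=True,
        auto_delete=True,
        initialize_params=compute_v1.AttachedDiskInitializeParams(
            source_image="projects/debian-cloud/global/images/family/debian-11",
            disk_size_gb=200,
        ),
    )

    instance = compute_v1.Instance(
        name=name,
        # Assumed A3 machine type bundling the eight H100 GPUs.
        machine_type=f"zones/{ZONE}/machineTypes/a3-highgpu-8g",
        disks=[boot_disk],
        network_interfaces=[
            compute_v1.NetworkInterface(network="global/networks/default")
        ],
        # GPU VMs cannot live-migrate, so maintenance must terminate the instance.
        scheduling=compute_v1.Scheduling(on_host_maintenance="TERMINATE"),
    )

    client = compute_v1.InstancesClient()
    operation = client.insert(
        project=PROJECT_ID, zone=ZONE, instance_resource=instance
    )
    operation.result()  # block until the create operation completes
    print(f"Created {name} in {ZONE}")


if __name__ == "__main__":
    create_a3_instance("a3-llm-training-0")
```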
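The 3.6 TB/s bisectional-bandwidth figure also lines up with Nvidia's published per-GPU NVLink numbers. The quick check below is a back-of-the-envelope estimate, assuming each H100 exposes the 900 GB/s of NVLink 4.0 bandwidth Nvidia advertises and that NVSwitch lets all of it cross a four-versus-four split of the GPUs; it is not a calculation taken from Google's announcement.

```python
# Back-of-the-envelope check of the 3.6 TB/s bisectional-bandwidth figure.
# Assumption: each H100 offers 900 GB/s of total NVLink 4.0 bandwidth
# (Nvidia's published per-GPU number) and NVSwitch lets every GPU drive
# its full bandwidth across a 4-vs-4 split of the eight GPUs.

NVLINK_BW_PER_GPU_GBPS = 900   # GB/s per H100 over NVLink 4.0
GPUS_PER_VM = 8

# Bisection: split the 8 GPUs into two halves of 4 and sum the bandwidth
# crossing the cut.
gpus_per_half = GPUS_PER_VM // 2
bisection_bw_gbps = gpus_per_half * NVLINK_BW_PER_GPU_GBPS

print(f"Estimated bisectional bandwidth: {bisection_bw_gbps / 1000:.1f} TB/s")
# -> Estimated bisectional bandwidth: 3.6 TB/s, matching Google's figure
```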