Nvidia introduces Spectrum-4 platform for AI, HPC over Ethernet

Based on a new ASIC, DPU, smartNIC, and SDK, Nvidia ups the speed and efficiency of its switching platform.


Nvidia is best known for its GPUs, but the company has introduced Spectrum-4, a combination of networking technologies that reinforces its commitment not only to graphics processors but also to systems designed to handle the demanding network workloads of AI and high-performance computing.

The latest Nvidia Spectrum products rely on the new Spectrum-4 Ethernet-switch ASIC, which boasts 51.2 Tb/s of switching and routing capacity. The chip underpins the latest members of the company’s Spectrum switch line, which will be available later this year. The switches are part of a larger Spectrum-4 platform that integrates Nvidia’s ConnectX-7 smartNIC, its new BlueField-3 DPU, and its DOCA software-development platform.

The company introduced Spectrum-4 at its GPU Technology Conference this week.

The Spectrum-4 SN5000 Ethernet switch family can support 128 ports of 400GbE (128 x 400 Gb/s accounts for the ASIC’s full 51.2 Tb/s capacity), combined with adaptive routing and enhanced congestion-control mechanisms to optimize RDMA over Converged Ethernet (RoCE) fabrics.

It will handle massive data sets, such as those needed for modeling entire cars, entire factories, and even the entire Earth for weather modeling, said Kevin Deierling, vice president of networking at Nvidia. Ethernet wasn’t designed for these massive data sets; it was designed for small packet exchanges, or what Nvidia calls “mice flows,” he said. The giant data sets used in HPC and AI are what he referred to as “elephant flows,” which can overwhelm traditional Ethernet architectures.
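To make the mice-versus-elephant distinction concrete, here is a minimal Python sketch that labels flows by cumulative byte count. The 10MB threshold, the flow keys, and the sample traffic are assumptions chosen purely for illustration; they do not represent Nvidia’s actual flow-detection or congestion-control logic.

```python
# Illustrative sketch only: separate small "mice" flows from large
# "elephant" flows by cumulative byte count per flow.
# The threshold and sample data below are assumptions, not Nvidia's method.
from collections import defaultdict

ELEPHANT_THRESHOLD_BYTES = 10 * 1024 * 1024  # assumed cutoff: 10 MB

def classify_flows(packets):
    """Sum packet sizes per flow key and label each flow by total volume."""
    bytes_per_flow = defaultdict(int)
    for flow_key, size in packets:
        bytes_per_flow[flow_key] += size

    return {
        flow_key: "elephant" if total >= ELEPHANT_THRESHOLD_BYTES else "mouse"
        for flow_key, total in bytes_per_flow.items()
    }

if __name__ == "__main__":
    # Hypothetical traffic: a short request/response exchange and a bulk transfer.
    sample = [
        ("10.0.0.1->10.0.0.2:443", 1_500),               # small exchange
        ("10.0.0.1->10.0.0.2:443", 1_500),
        ("10.0.0.3->10.0.0.4:4791", 64 * 1024 * 1024),   # bulk RDMA-style transfer
    ]
    print(classify_flows(sample))
```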
