VMware, Nvidia offer GPU-powered AI in virtual machines

Expanded partnership aims to make it easier for enterprises to run GPU-accelerated AI applications.

VMware and Nvidia have expanded their alliance to support Nvidia GPU-based applications on VMware's new vSphere 7 Update 2. The upgraded version of vSphere 7 will support the new Nvidia AI Enterprise offering, a suite of enterprise-grade AI tools and frameworks that enables GPU-accelerated applications to run in VMware virtual machines and containers.

VMware's vSphere 7 U2 adds support for Nvidia's A100 Tensor Core GPU and its multi-instance GPU (MIG) feature, which partitions an A100 into isolated instances for use by multiple users, much as VMware partitions CPU cores among multiple users.
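As a rough illustration of how that partitioning works on the host side, an A100 can be carved into MIG instances with Nvidia's `nvidia-smi` tool. This is a hedged sketch only; profile IDs, device indices, and whether this is done on the ESXi host or inside a VM depend on the deployment:

```shell
# Enable MIG mode on GPU 0 (requires GPU reset; run as root)
nvidia-smi -i 0 -mig 1

# List the MIG instance profiles the A100 supports
nvidia-smi mig -lgip

# Create two GPU instances from the 3g.20gb profile (profile ID 9 on
# many A100-40GB cards) along with their default compute instances;
# each instance can then be handed to a separate VM or container
nvidia-smi mig -cgi 9,9 -C

# Verify the resulting MIG devices
nvidia-smi -L
```

Each MIG instance gets its own dedicated slice of compute cores and memory, which is what makes the CPU-core analogy above apt: tenants are isolated from one another rather than contending for the whole GPU.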

This means AI workloads can now run on VMware's virtualized platform. Until now, such workloads have typically run on bare-metal servers: AI is performance-intensive, and a bare-metal environment delivers the full power of the hardware rather than sharing it in a virtual, multi-tenant scenario.

Nvidia claims in a blog post announcing the new software that AI Enterprise enables virtual workloads to run at near bare-metal performance on vSphere. AI workloads will be able to scale across multiple nodes, allowing even the largest deep-learning training models to run on VMware Cloud Foundation.

With this capability, developers can build scale-out, multi-node performance for CUDA applications, AI frameworks, models and SDKs on the vSphere platform. The AI Enterprise platform is designed to be deployed on Nvidia-certified systems from Dell Technologies, Hewlett Packard Enterprise (HPE), Supermicro, Gigabyte, and Inspur.
