VMware’s Bitfusion acquisition could be a game-changer for GPU computing

VMware will integrate Bitfusion technology into vSphere, bolstering VMware’s strategy of supporting AI- and ML-based workloads by virtualizing hardware accelerators.


In a low-key move that flew under the radar of a lot of us, VMware last week snapped up Bitfusion, a startup that makes virtualization software for accelerated computing. Bitfusion’s software improves the performance of virtual machines by offloading processing to accelerator chips such as GPUs, FPGAs, and other custom ASICs.

Bitfusion pools GPU resources and shares them among otherwise isolated GPU compute workloads across the customer’s network. Workloads are thus not tied to one physical server but draw on a shared pool of resources, and when multiple GPUs are brought to bear on a job, performance naturally increases.

“In many ways, Bitfusion offers for hardware acceleration what VMware offered to the compute landscape several years ago. Bitfusion also aligns well with VMware’s ‘Any Cloud, Any App, Any Device’ vision with its ability to work across AI frameworks, clouds, networks, and formats such as virtual machines and containers,” said Krish Prasad, senior vice president and general manager of the Cloud Platform Business Unit at VMware, in a blog post announcing the deal.

When the acquisition closes, VMware will integrate Bitfusion technology into vSphere. Prasad said the inclusion of Bitfusion will bolster VMware’s strategy of supporting artificial intelligence- and machine learning-based workloads by virtualizing hardware accelerators.

“Multi-vendor hardware accelerators and the ecosystem around them are key components for delivering modern applications. These accelerators can be used regardless of location in the environment—on-premises and/or in the cloud,” he wrote, adding that the platform can be extended to support other accelerator chips, such as FPGAs and ASICs.

Prasad noted that hardware accelerators today are deployed “with bare-metal practices, which force poor utilization, poor efficiencies, and limit organizations from sharing, abstracting, and automating the infrastructure. This provides a perfect opportunity to virtualize them—providing increased sharing of resources and lowering costs.”

He added: “The platform can share GPUs in a virtualized infrastructure as a pool of network-accessible resources rather than isolated resources per server.”

This is a real game-changer, much the way adding storage virtualization and software-defined networking (SDN) expanded the use of vSphere. It also gives VMware a major competitive advantage over Microsoft Hyper-V and Linux’s KVM.

By virtualizing and pooling GPUs, vSphere would let users bring multiple GPUs to bear on a workload rather than locking one physical processor to a single server and application. The same applies to FPGAs and the many AI processor chips already on or coming to market.
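Bitfusion’s actual implementation is proprietary (it intercepts accelerator API calls over the network), but the resource-allocation idea Prasad describes—GPUs as one network-wide pool rather than resources locked to individual servers—can be sketched in a toy Python model. All class and variable names here are illustrative assumptions, not VMware or Bitfusion APIs:

```python
# Toy model of pooled GPU allocation. Illustrative only: class names
# (GPU, GPUPool) and the acquire/release interface are assumptions for
# this sketch, not Bitfusion's real API.
from dataclasses import dataclass


@dataclass
class GPU:
    server: str        # host where the physical GPU lives
    gpu_id: int
    in_use: bool = False


class GPUPool:
    """Network-wide pool: a workload may claim any free GPU, on any host."""

    def __init__(self, gpus):
        self.gpus = gpus

    def acquire(self, count):
        # Claim the first `count` free GPUs, wherever they physically sit.
        free = [g for g in self.gpus if not g.in_use]
        if len(free) < count:
            raise RuntimeError("not enough free GPUs in the pool")
        claimed = free[:count]
        for g in claimed:
            g.in_use = True
        return claimed

    def release(self, gpus):
        for g in gpus:
            g.in_use = False


# Four GPUs spread across two servers form a single pool.
pool = GPUPool([GPU("server-a", 0), GPU("server-a", 1),
                GPU("server-b", 0), GPU("server-b", 1)])

# One workload claims three GPUs spanning both servers -- something a
# bare-metal, one-GPU-per-host deployment could not offer it.
claimed = pool.acquire(3)
print(sorted({g.server for g in claimed}))  # the claim spans both servers
```

The contrast with the “bare-metal practices” Prasad criticizes is the `acquire` step: instead of each server’s workloads being limited to that server’s local accelerators, any workload can draw from the whole pool, which is what raises utilization.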

VMware also buys Uhana

That wasn’t VMware’s only purchase. The company also acquired Uhana, whose AI engine, built for telcos and other carriers, discovers anomalies in the network or in applications, prioritizes them by potential impact, and automatically recommends optimization strategies. The result: improved network operations and operational efficiency.
