Nvidia, Citrix crank up virtual desktop delivery

The companies are making hardware and software to deliver more virtual machines per GPU

Improvements to Nvidia's virtualization technology are aimed at turning graphics processors into a more important resource in data centers and could speed deployment of virtual desktops and delivery of data over the cloud.

The graphics company on Tuesday announced improvements to its VGX technology, which virtualizes the GPU and makes it a shareable server resource, much like CPUs and memory. Hardware and software improvements to VGX will allow a single graphics board to deliver multiple virtual desktops; previously, VGX could deliver only one virtual machine per graphics board.

Virtualization enables efficient use of server resources in distributed computing environments, and GPUs could help cut electricity bills through faster delivery of virtual desktops. GPUs are considered faster than CPUs for some workloads: they are used in some of the world's fastest computers for complex calculations, and by Web browsers for faster graphics rendering. Virtualizing graphics processors could enable servers to deliver games over the cloud and make high-performance resources available to remote users.

Nvidia worked with virtualization company Citrix to make improvements at the hypervisor, driver and hardware levels, said Sanford Russell, director of Grid marketing at Nvidia. The VGX improvements will work only with Citrix's Xen products, including XenServer and XenApp. Ultimately, Nvidia hopes to bring the VGX improvements to virtualization technologies from VMware and Microsoft, but Russell could not provide a specific date on when that may happen.

Graphics processors from Nvidia and Advanced Micro Devices are already being used for virtualization, with server makers Dell, Hewlett-Packard and IBM offering servers designed for hyperscale environments. But virtual desktop user sessions have been constrained by limited GPU resources.

The updates to VGX will let servers deploy virtual desktops running full Windows 7, and users will be able to run multiple applications in each session, Russell said.

"What we are delivering is a true PC experience," Russell said.

The Nvidia Grid K1 graphics board, which has four graphics processors and 16GB of DDR3 memory, will be able to support up to 32 virtual machines simultaneously. The Grid K2 board, which has two graphics processors and 8GB of GDDR5 memory, will be able to support up to eight VMs. The virtual machines will be able to tap into on-board DirectX 11 support to boost multimedia performance.
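Taken at face value, those figures work out to eight virtual machines per GPU on the K1 and four per GPU on the K2. A quick sketch of the arithmetic (the board names, GPU counts, memory sizes and VM limits come from the announcement; the per-GPU and per-VM splits are plain division, not official Nvidia specifications):

```python
# Back-of-the-envelope arithmetic from the board specs quoted above.
# Figures per board come from the article; the derived per-GPU and
# per-VM numbers are simple division, not an Nvidia specification.
boards = {
    # name: (gpu_count, memory_gb, max_vms)
    "Grid K1": (4, 16, 32),
    "Grid K2": (2, 8, 8),
}

for name, (gpus, mem_gb, max_vms) in boards.items():
    vms_per_gpu = max_vms // gpus
    mem_per_vm = mem_gb / max_vms
    print(f"{name}: {vms_per_gpu} VMs per GPU, "
          f"{mem_per_vm:.2f} GB of board memory per VM")
```

The roughly 0.5GB to 1GB of board memory available per session suggests the limits are tuned for mainstream desktop workloads rather than heavy 3D applications.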

The graphics processors are based on Nvidia's latest Kepler architecture, and the boards have independent schedulers and memory management units to handle virtual-machine deployment. VGX enables GPUs to deploy and manage virtual machines directly, without consuming CPU cycles.

Virtualization is key in rendering cloud services via GPUs, but the VGX improvements could be ahead of their time, said Jim McGregor, principal analyst at Tirias Research.

"This is overkill for what people need and not everyone will make use of the resource," McGregor said.

Nvidia is offering a new set of server products and graphics boards under the Grid brand as the company connects GPUs to the growing number of virtualization and cloud deployments. Nvidia offers the Grid Visual Computing Appliance (VCA), which does server-side processing of multimedia and other applications for cloud-based delivery to virtual desktops on thin clients, PCs or tablets. The company has also partnered with server makers IBM and Dell to offer GPU-rich Grid servers. Nvidia also said Cisco will ship its VGX-enabled Grid server, the UCS C240 M3, starting this month.

But GPUs are already becoming more practical in servers with gaming going online and more applications being written using parallel programming tools like OpenCL and CUDA, McGregor said.

Servers deal with different types of workloads, and GPUs still require CPUs to function in distributed computing environments. Instructions to the GPU are funneled through the CPU.

"Using [GPUs] as a processor architecture in the cloud is no different than using a [CPU] or custom processor," McGregor said.

Nvidia and AMD are designing chips and establishing open standards that make GPUs a more accessible resource. The AMD-led HSA (Heterogeneous System Architecture) Foundation has introduced hUMA (heterogeneous uniform memory access), which will give CPUs and GPUs uniform access to the same memory. Nvidia's next graphics processor, called Maxwell and due next year, will also pool CPU and GPU memory.

Technologies like VGX make GPUs more relevant in server environments, McGregor said.

"It has a solid road map," McGregor said.

Agam Shah covers PCs, tablets, servers, chips and semiconductors for IDG News Service. Follow Agam on Twitter at @agamsh. Agam's e-mail address is agam_shah@idg.com
