
Nvidia launches new hardware and software for on-prem and cloud providers

News Analysis
Mar 21, 2019
Data Center

The company unveils GPU blades, AI software libraries and low-power GPUs.

Nvidia used its GPU Technology Conference in San Jose to introduce new blade servers for on-premises use and announce new cloud AI acceleration.

The RTX Blade Server packs up to 40 Turing-generation GPUs into an 8U enclosure, and multiple enclosures can be combined into a “pod” of up to 1,280 GPUs working as a single system, with Mellanox technology serving as the storage and networking interconnect. That likely explains why Nvidia is paying close to $7 billion for Mellanox.

Rather than targeting AI, where Nvidia has become a leader, the RTX Blade Server is positioned for 3D rendering, ray tracing and cloud gaming. The company said this setup will enable the rendering of realistic-looking 3D images in real time for VR and AR.

Dell EMC, HPE, Lenovo, ASUS and Supermicro were at GTC and all introduced RTX servers.

On the AI side of things, Nvidia introduced CUDA-X AI, which it claims is the world’s only end-to-end acceleration library for data science. CUDA is Nvidia’s parallel programming platform, which uses a C++-style syntax to program its GPUs specifically.

The typical workflow for deep learning, machine learning and data analytics is data processing, feature determination, training, verification and deployment. These are all very different steps in the process and typically require different types of processing. CUDA-X AI uses the NVIDIA Tensor Core GPUs to address the end-to-end AI pipeline.
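To make those stages concrete, here is a minimal CPU-only sketch of the same five-step workflow using scikit-learn as a generic stand-in. This is an illustration of the pipeline the article describes, not of Nvidia’s own API; CUDA-X AI accelerates the GPU equivalents of these steps through libraries such as cuDF and cuML.

```python
# Illustrative only: a generic scikit-learn pipeline standing in for the
# five stages of the AI workflow (data processing, feature determination,
# training, verification, deployment). CUDA-X AI runs the GPU-accelerated
# equivalents of these steps on Nvidia Tensor Core GPUs.
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# 1. Data processing: load raw data and normalize it.
X, y = load_iris(return_X_y=True)
X = StandardScaler().fit_transform(X)

# 2. Feature determination: keep only the most informative features.
X = SelectKBest(f_classif, k=2).fit_transform(X, y)

# 3. Training: fit a simple classifier on a training split.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=200).fit(X_train, y_train)

# 4. Verification: check accuracy on held-out data.
accuracy = model.score(X_test, y_test)

# 5. Deployment: use the fitted model to serve a prediction.
prediction = model.predict(X_test[:1])
print(f"accuracy={accuracy:.2f}, prediction={int(prediction[0])}")
```

Each stage stresses hardware differently (I/O-bound loading versus compute-bound training), which is why an end-to-end acceleration library that keeps every step on the GPU is the selling point here.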

And it has considerable adoption out of the gate. CUDA-X AI has been adopted by all the major cloud services, including Amazon Web Services, Google Cloud Platform and Microsoft Azure, as well as by enterprises such as Charter, PayPal, SAS and Walmart.

For on-prem servers, Nvidia introduced a new generation of T4 GPU processors. CEO Jen-Hsun Huang said the T4 draws only 70 watts of power, a big reduction from the usual GPU power draw, is “the size of a candy bar,” and fits into the world’s most popular high-volume data center servers.

And as is always the case, Nvidia announced major server vendor support. Cisco, Dell EMC, Fujitsu, HPE, Inspur, Lenovo and Sugon all now offer Nvidia T4 GPU servers for data analytics, machine learning and deep learning.

In addition, Amazon Web Services announced it will release its latest GPU-equipped instance with support for Nvidia’s T4 Tensor Core GPUs, with a focus on machine learning workloads. Amazon’s Elastic Container Service for Kubernetes will also support the T4.

“Because T4 GPUs are extremely efficient for AI inference, they are well-suited for companies that seek powerful, cost-efficient cloud solutions for deploying machine learning models into production,” said Ian Buck, vice president and general manager of accelerated computing at Nvidia, in a blog post.

Andy Patrizio is a freelance journalist based in southern California who has covered the computer industry for 20 years and has built every x86 PC he’s ever owned, laptops not included.

The opinions expressed in this blog are those of the author and do not necessarily represent those of ITworld, Network World, its parent, subsidiary or affiliated companies.
