How converged infrastructure can accelerate the AI journey

Pure Storage's AIRI simplifies and speeds up the process of deploying infrastructure to support artificial intelligence-based systems.


The technology that powers businesses is evolving faster than ever before, allowing us to do more than we ever thought possible. Things that were once only seen in science fiction movies are actually coming to life.

One of these areas is the field of artificial intelligence (AI). We’re on the verge of having machines diagnose cancer, map out the universe, take over dangerous jobs, and drive us around. The downside to this rapid evolution has been a rise in complexity: the infrastructure and software to power AI-based systems can take months to build, tune, and tweak before they run optimally.

Compounding the difficulty is that AI infrastructure is often deployed by data scientists who do not have the same level of technical acumen as the IT team. 

Pure Storage offers a turnkey approach to AI infrastructure

At Nvidia's GPU Technology Conference (GTC), Pure Storage announced a turnkey solution to simplify the deployment of AI infrastructure. The product, known as AIRI (AI Ready Infrastructure), is a validated, optimized solution that includes Pure Storage’s FlashBlade, 100 Gig-E switches from Arista, four DGX-1 servers from Nvidia, and all the software required to operationalize AI at scale. The product is supported by the Nvidia GPU Cloud deep learning stack and Pure Storage AIRI Scaling Toolkit, enabling data scientists to get to work in a few hours instead of months.

The product is similar to other converged infrastructure offerings, such as Cisco’s FlexPod and Dell EMC’s VxBlock, that give businesses a turnkey way to stand up a private cloud in under a day. I’ve talked to customers of both products, and they told me that converged products take all the complexity out of the deployment, so companies can start using the infrastructure immediately. Converged infrastructure was a huge leap forward for private clouds, and I expect it to have a similar impact on AI.

A DIY approach to AI can be filled with complexity and long lead times

With AI, the need for a simpler option is even greater because AI processes, such as training and inferencing, are extremely data intensive. These workloads expose bottlenecks in the infrastructure, and optimizing the infrastructure can be a long, drawn-out process. For example, TensorFlow is part of the software stack, and it alone has hundreds of configuration options. With AIRI, all of that pre-configuration is done for the deploying organization.

Data scientists are among the highest paid employees in organizations, so having them sit around while the infrastructure components are put together wastes time and costs money.

AIRI is highly scalable and flexible

AIRI offers organizations a massive amount of compute power and storage. The product can initially be configured with a single DGX-1 system, which delivers 1 petaflop of performance, and it can easily be scaled out to four systems. One of the great benefits of Nvidia’s technology is that it scales linearly, so four DGX-1 systems deliver 4x the performance with no degradation.


The Pure Storage array can hold up to 15 blades of 52 TB each, for a total capacity of over 750 TB. That storage scale is important, as AI requires massive amounts of data in the learning phase. In fact, AI research often depends on the use of generative adversarial networks (GANs) to improve the rate of learning. With GANs, the AI generates large volumes of synthetic data on top of the already high volumes of real data.
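As a quick back-of-the-envelope check on those numbers (a minimal sketch using only the figures quoted above; this is raw capacity, before any formatting or data-reduction overhead):

```python
# Rough arithmetic behind AIRI's quoted scale, using the figures from the text.
# Raw capacity only -- usable capacity will be lower after system overhead.

BLADES = 15            # maximum blades in the FlashBlade array
TB_PER_BLADE = 52      # capacity per blade, in terabytes

raw_capacity_tb = BLADES * TB_PER_BLADE
print(f"Raw FlashBlade capacity: {raw_capacity_tb} TB")  # 780 TB, i.e. "over 750 TB"

# Compute side: one DGX-1 delivers roughly 1 petaflop, and Nvidia's
# scaling is described as linear, so four systems deliver roughly 4x.
DGX1_PETAFLOPS = 1
SYSTEMS = 4
print(f"Four DGX-1 systems: ~{DGX1_PETAFLOPS * SYSTEMS} petaflops")
```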

One of the interesting aspects of AIRI is that it uses Ethernet to connect the components. Historically, a product like this would have used InfiniBand because its speeds were higher and its latency much lower. Today, particularly with Arista, Ethernet latency has gotten close to InfiniBand’s and the speeds are equivalent. Ethernet also has a very aggressive evolutionary roadmap out to 400 Gig-E and is simpler and more flexible than InfiniBand, making it the more logical choice.

I believe AI has reached an inflection point, and its use will explode over the next few years. Similar to when private clouds came into their own, IT teams will be tasked with bringing together all the storage, servers, software, and network infrastructure required to power AI programs. One approach is to buy all the building blocks and piece them together, but for most organizations, a better approach is to leverage a turnkey product such as Pure Storage’s AIRI.

Note: Of the vendors named in this post, Arista, Cisco, and Nvidia are clients of ZK Research.  
