Nvidia gives glimpse of the future at its GPU Technology Conference

Connected things, virtual reality, augmented reality, deep learning and artificial intelligence are about to converge and change the way we live and work


Historically, GPUs have been used in graphics-heavy processes such as video games. It’s fair to say that to serious gamers, Nvidia-based graphics cards have become the de facto standard. However, as I pointed out previously, GPUs have become increasingly important in applications such as artificial intelligence (AI), virtual reality (VR) and analytics.

I was fortunate to attend Nvidia’s annual GPU Technology Conference last week, and the keynote from CEO Jensen Huang was perhaps the most innovative future-looking session I have seen in a long time.


I believe we are in the very early stages of a “perfect storm” where connected things, VR, augmented reality (AR), deep learning and AI are going to come together and significantly change the way we live and work. Below are some of the highlights from Huang’s keynote of cool things he expects will become reality in the very near future. 

Project Holodeck

Star Trek fans will be familiar with the concept of a Holodeck—a fully virtual world where the laws of the physical world apply. However, the Nvidia version will be used for practical things instead of enabling Commander Data to pretend he’s Sherlock Holmes.


Huang’s keynote included a demonstration of a car manufacturer, Koenigsegg, using the virtual world to experiment with a new automobile. The Holodeck made it easy for engineers to change aspects of the car without requiring an actual vehicle. Since the environment models the real world, Koenigsegg can use it to test-drive the car and run different simulations. Holodeck enables companies to simulate and experiment without incurring the cost of building the physical elements.

Isaac the robot simulator

Named after two famous Isaacs (Asimov and Newton, not the bartender from The Love Boat), Isaac uses video-game-style graphics to simulate real-life environments for training a robot. Consider the task of teaching a robot to walk. Engineers would need to set up scenarios where the robot walks uphill, downhill, upstairs, on a gravel road, on slippery surfaces and in any other environment one can think of. Each scenario requires setting up the robot and the surroundings.

During the keynote, Nvidia demonstrated a robot shooting a hockey puck. Every shot required moving the net to a different location so it could eventually learn how to shoot in any scenario. With a physical robot, time is required to move the net, measure the distance and run the test. Using Isaac, dozens or even hundreds of simulations could be programmed and run, shortening the teaching time from months to days or even hours.
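To illustrate why simulation compresses the timeline, here is a toy Python sketch of the pattern, not Nvidia's actual Isaac software. "Moving the net" becomes drawing a random number, so thousands of scenarios run in seconds; the crude hill-climbing "policy" below is a stand-in for real reinforcement learning:

```python
import random

def simulate_shot(aim_angle, net_angle):
    """Toy physics: the shot scores if the aim is within 5 degrees of the net."""
    return abs(aim_angle - net_angle) < 5.0

def train(trials=1000):
    """Run many randomized simulated trials.

    In the physical world, each trial would mean moving the net,
    measuring the distance and resetting the robot. In simulation,
    a new scenario is just a new random draw.
    """
    policy = {}  # hypothetical policy: best aim found per coarse net position
    for _ in range(trials):
        net_angle = random.uniform(-45, 45)   # randomize the scenario
        bucket = round(net_angle / 5) * 5     # coarse net position
        aim = policy.get(bucket, random.uniform(-45, 45))
        if simulate_shot(aim, net_angle):
            policy[bucket] = aim              # keep what worked
        else:
            policy[bucket] = aim + random.uniform(-2, 2)  # explore nearby
    return policy
```

The point of the sketch is the loop structure, not the learning algorithm: because the scenario reset costs nothing, trial count scales with compute rather than with lab time.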

I asked Huang what “ten-year-out” problem he was thinking about today, and he said that ideally robots could learn in a simulator like Isaac and then simply wake up in the physical world fully functional. Nvidia also announced a set of robot reference designs to speed up robot manufacturing using the Jetson GPU.

The ‘big bang’ of modern AI

AI has been theorized for decades, but only recently have we started seeing it come to life. Recently, there has been a flurry of AI use cases highlighted in the media, including an AI playing Go and poker, writing news stories and calculating insurance claims. During the keynote, Nvidia highlighted a plethora of AI use cases, including video analytics, deep voice, transfer learning and simulations. Make no mistake: The AI era is here, and over the next year, we’ll see more AI use cases than we have throughout history.

Ray tracing

Technically this is an AI use case, but one I thought was worth calling out. Anyone who has loaded ultra-high-resolution images knows how slow this can be. For example, an MRI is an incredibly dense image that can take several minutes to render properly on a screen. With AI-assisted ray tracing, once part of the image starts to render, the AI can infer the rest of the image and speed up the process.


The quality of the application was outstanding. One of the demos used an image of a dinner table. The AI rendered it in perhaps a quarter of the time and was so detailed that it showed shadows, reflections in wine glasses and other details. This could have a profound impact on healthcare and other industries that rely on dense, large images.
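The speed-up comes from computing only a fraction of the expensive samples and inferring the rest. A minimal Python sketch of that pattern, with nearest-neighbor fill standing in for the trained neural network Nvidia actually uses (that substitution is my assumption, not Nvidia's code):

```python
def trace_pixel(x, y):
    """Stand-in for an expensive ray-traced sample (the slow part)."""
    return (x * x + y * y) % 255

def render_sparse_then_infer(width, height, step=4):
    """Trace only every `step`-th pixel, then 'infer' the gaps.

    With step=4, only 1/16th of the pixels pay the full ray-tracing
    cost; the rest are filled in by a cheap pass. A learned denoiser
    does this fill-in far better than the nearest-neighbor copy here.
    """
    image = [[None] * width for _ in range(height)]
    for y in range(0, height, step):          # sparse, expensive samples
        for x in range(0, width, step):
            image[y][x] = trace_pixel(x, y)
    for y in range(height):                   # cheap fill-in pass
        for x in range(width):
            if image[y][x] is None:
                image[y][x] = image[y - y % step][x - x % step]
    return image
```

The economics are the same whether the fill-in is a copy or a neural network: the render time is dominated by how many pixels are actually traced, which is why inferring the rest can cut the time to a fraction.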

AI and deep learning present a significant opportunity to make Nvidia as important to the next era of computing as Intel was to the growth of PCs. Innovation in new areas such as deep learning is often slow to take off until there are some good examples that “prime the pump”—then the market takes off.

Deep Learning Institute

At GTC 2016, Nvidia launched the Deep Learning Institute (DLI), which is an entire learning environment for developers, researchers, start-ups and data scientists who want to learn how to leverage deep learning. The benefit to Nvidia is that when people go through DLI, they learn how to do things the Nvidia way.

This is similar to the approach Cisco took in the early days of the internet with its Networking Academy. It educated a world of network engineers who knew networking and, more important to Cisco, knew how to best build networks using Cisco gear. The Networking Academy and certifications were significant contributors to Cisco holding the dominant share it has today.

Nvidia is creating an industry of deep learning experts who will consider Nvidia the de facto standard. After one year, DLI should be considered a smashing success, as it already has roughly 15,000 members. The next step is for the company to create certifications to formally recognize deep learning competency. I have been told that the company will roll these out within the next year.

The internet revolutionized the world, and CPUs were at the heart of that era. We’re on the precipice of seeing the world change again, this time driven by AI and VR—and the GPU is core to this revolution. At GTC 2017, we caught a glimpse of the future, but the future will be here faster than we know it.
