Intel continues to optimize its products around AI

Intel made a series of processor and memory announcements aimed at the data center and artificial intelligence, including new Xeon chips and its Intel Optane DC persistent memory.

Normally, this is the time of year when Intel would hold its Intel Developer Forum conference, which would be replete with new product announcements. But with the demise of the show last year, the company instead held an all-day event that it live-streamed over the web.

The company’s Data Centric Innovation Summit was the backdrop for a series of processor and memory announcements aimed at the data center and artificial intelligence, in particular. Even though Intel is without a leader, it still has considerable momentum. Navin Shenoy, executive vice president and general manager of the Data Center Group, did the heavy lifting.

News about Cascade Lake, the rebranded Xeon server chip

First up was the next generation of the Xeon Scalable processor, codenamed "Cascade Lake." It will feature a memory controller for Intel's new Optane DC persistent memory and an embedded AI accelerator that the company claims will speed up deep learning inference workloads eleven-fold compared with current-generation Xeon Scalable processors.

Cascade Lake also will provide enhanced security features to address the Spectre and Meltdown vulnerabilities, plus Intel Deep Learning Boost, an AI extension to the Intel AVX-512 instruction set designed to accelerate AI workloads. Cascade Lake is scheduled to begin shipping late this year.
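To make the Deep Learning Boost claim concrete: its VNNI instructions (such as VPDPBUSD) fuse the int8 multiply / int32 accumulate pattern used in quantized inference into a single instruction. The snippet below is an illustrative sketch of that arithmetic in plain Python, not Intel's implementation; the function name is invented for illustration.

```python
def int8_dot_accumulate(acc, a_u8, b_s8):
    """Emulate one VNNI-style step: multiply four unsigned-8-bit values by
    four signed-8-bit values and accumulate the sum into a 32-bit accumulator
    (roughly what VPDPBUSD does per lane, here as scalar Python)."""
    assert len(a_u8) == len(b_s8) == 4  # VNNI operates on groups of four bytes
    return acc + sum(a * b for a, b in zip(a_u8, b_s8))

# Quantized inference replaces float32 math with many such int8 steps:
acc = int8_dot_accumulate(0, [1, 2, 3, 4], [5, -6, 7, -8])
print(acc)  # 1*5 + 2*(-6) + 3*7 + 4*(-8) = -18
```

Without VNNI, this pattern takes several vector instructions per group of bytes; fusing it into one is where much of the claimed inference speedup comes from.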

The next chip after that, in late 2019, will be Cooper Lake. Intel did not go into great detail except to say it would offer a general set of performance improvements, plus improvements for AI training workloads. Ice Lake is set for 2020, and the only detail offered was that it will be built on a 10nm manufacturing process vs. the 14nm used for Cascade Lake and Cooper Lake. Getting its chips down to 10nm has been a long struggle and one of the company's biggest failings in recent years.

Xeon was not optimized for AI even two years ago, said Shenoy, but inference performance has since improved 5.4x on Skylake, the latest architecture used in the Xeon Scalable platform.

Intel ships first production units of Optane DC

Shenoy also gave an update on Intel Optane DC persistent memory, a new class of memory and storage that sits between DRAM and NAND flash-based SSDs in speed and performance. Acting as a cache in front of SSDs, it can, Intel says, achieve up to eight times the performance of a DRAM-only configuration.

Intel said it shipped the first production units of Optane to Google and general availability is planned for 2019.

Nervana AI processor set for release

Intel has two AI processor lines, Nervana and Movidius, and the company said the first commercial Nervana chip, the NNP L-1000, is set for release in 2019. A previous chip was made available to developers to start working on apps, but this will be the first mass-market processor. Intel claims a three- to four-fold improvement in training performance over the first-generation NNP chip.

Intel has had it rough lately: losing its CEO; facing a revitalized AMD; dealing with the Spectre and Meltdown bugs; Nvidia eating its breakfast, lunch, and dinner in the AI space; and its tremendous struggle to get to 10nm manufacturing. So far, though, it continues to execute on its roadmap. Now it needs to deliver on the promises.
