Normally, this is the time of year when Intel would hold its Intel Developer Forum conference, replete with new product announcements. But with the demise of the show last year, the company instead held an all-day event that it live-streamed over the web.

The company's Data Centric Innovation Summit was the backdrop for a series of processor and memory announcements aimed at the data center and, in particular, artificial intelligence. Even though Intel is without a leader, it still has considerable momentum. Navin Shenoy, executive vice president and general manager of the Data Center Group, did the heavy lifting.

News about Cascade Lake, the next Xeon Scalable chip

First is news around the Xeon Scalable processor, the rebranded Xeon server chip. The next-generation chip, codenamed "Cascade Lake," will feature a memory controller for Intel's new Optane DC persistent memory and an embedded AI accelerator that the company claims will speed up deep learning inference workloads eleven-fold compared with current-generation Xeon Scalable processors.

Cascade Lake will also provide enhanced security features to address the Spectre and Meltdown vulnerabilities, plus an AI extension called Intel Deep Learning Boost that augments Intel AVX-512 with new instructions designed for AI. Cascade Lake is scheduled to begin shipping late this year.

The next chip after that, due in late 2019, will be Cooper Lake. Intel did not go into great detail except to say it will offer a general set of performance improvements, plus specific gains for AI training workloads. Ice Lake is set for 2020, and the only detail offered was that it will be built on a 10nm manufacturing process, versus the 14nm used for Cascade Lake and Cooper Lake.
Intel has struggled for years to get its chips down to 10nm, and that struggle has been one of the company's biggest failings in recent years.

Xeon was not optimized for AI even two years ago, Shenoy said, but inference performance has since improved 5.4-fold with Skylake, the latest architecture used in the Xeon Scalable platform.

Intel ships first production units of Optane DC

Shenoy also gave an update on Intel Optane DC persistent memory, a new class of memory and storage that falls somewhere between DRAM and NAND flash in speed and performance. That is exactly where it sits in the hierarchy, between DRAM and SSDs, acting as an SSD cache and, Intel says, able to achieve up to eight times the performance of a DRAM-only scenario.

Intel said it has shipped the first production units of Optane to Google, with general availability planned for 2019.

Nervana AI processor set for release

Intel owns two AI processor lines, Nervana and Movidius, and the company said the first commercial Nervana chip, the NNP L-1000, is set for release in 2019. A previous chip was made available to developers so they could start working on apps, but this will be the first mass-market processor. Intel claims a three- to four-fold improvement in training performance over the first-generation NNP chip.

Intel has had it rough lately: losing its CEO; facing a revitalized AMD; dealing with the Spectre and Meltdown bugs; watching Nvidia eat its breakfast, lunch, and dinner in the AI space; and struggling mightily to reach 10nm manufacturing. So far, though, it continues to execute on its roadmap. Now it needs to deliver on those promises.