Intel on Tuesday introduced its second-generation Xeon Scalable processors for servers, developed under the codename Cascade Lake, and it's clear AMD has lit a fire under a once-complacent company.

These new Xeon SP processors max out at 28 cores and 56 threads, a bit shy of AMD's Epyc server processors with 32 cores and 64 threads, but independent benchmarks are still to come and may show Intel holding a lead in single-core performance.

And for absolute overkill, there is the Xeon SP Platinum 9200 series, which sports 56 cores and 112 threads. It also requires up to 400W of power, more than twice what high-end Xeons typically consume.

The new processors were unveiled at a big event at Intel's headquarters in Santa Clara, California, and live-streamed on the web. Newly minted CEO Bob Swan kicked off the event, saying the new processors were the "first truly data-centric portfolio for our customers."

"For the last several years, we have embarked on a journey to transform from a PC-centric company to a data-centric computing company and build the silicon processors with our partners to help our customers prosper and grow in an increasingly data-centric world," he added.

He also said the move to a data-centric world isn't just about CPUs, but about a suite of accelerator technologies, including Agilex FPGAs, Optane memory, and more.

This is the largest Xeon launch in the company's history, with more than 50 processor designs across the Xeon 8200 and 9200 lines. While a lineup that large can cause confusion, many of these parts are tuned for specific workloads rather than being general-purpose processors.

Cascade Lake chips replace the previous Skylake generation, and the mainstream Cascade Lake parts use the same Purley platform as Skylake.
Like the current Xeon Scalable processors, they have up to 28 cores with up to 38.5MB of L3 cache, but speeds and feeds have been bumped up.

The Cascade Lake generation supports the UPI (Ultra Path Interconnect) high-speed interconnect, up to six memory channels, AVX-512, and up to 48 PCIe lanes. Memory capacity has been doubled, from 768GB to 1.5TB per socket. The chips fit the same socket as Purley motherboards and are built on a 14nm manufacturing process.

Some of the new Xeons, however, can address up to 4.5TB of memory per processor: 1.5TB of DRAM plus 3TB of Optane memory, the new persistent memory that sits between DRAM and NAND flash and acts as a massive cache for both.

Built-in fixes for Meltdown and Spectre vulnerabilities

Most important, though, is that these new Xeons have built-in fixes for the Meltdown and Spectre vulnerabilities. Fixes already exist for the exploits, but they reduce performance by an amount that varies with the workload. A slide Intel showed at the event indicates the company is using a combination of firmware and software mitigations.

New features also include Intel Deep Learning Boost (DL Boost), a technology that accelerates vector computation and that Intel says makes these the first CPUs with built-in inference acceleration for AI workloads. It works with the AVX-512 extension, which should make it well suited to machine-learning scenarios.

Most of the new Xeons are available now, except the Platinum 9200, which is coming in the next few months. Intel partners including Dell, Cray, Cisco, and Supermicro have announced new products, with Supermicro alone launching more than 100 products built around Cascade Lake.

Intel also rolls out Xeon D-1600 series processors

In addition to its hot rod Xeons, Intel also rolled out the Xeon D-1600 series processors, a low-power variant based on a completely different architecture.
Xeon D-1600 series processors are designed for space- and/or power-constrained environments, such as edge network devices and base stations.

Along with the new Xeons and FPGAs, Intel also announced the Intel Ethernet 800 series adapter, which supports 25, 50, and 100 Gigabit transfer speeds.

Thank you, AMD. This is what competition looks like.
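A footnote on the DL Boost claim above: on Cascade Lake, DL Boost is delivered as the AVX-512 VNNI extension, whose headline instruction, VPDPBUSD, fuses an 8-bit multiply-accumulate sequence into a single instruction. The sketch below is an illustration of what one 32-bit lane of that instruction computes, not Intel's implementation:

```python
def vpdpbusd_lane(acc, u8s, s8s):
    """Emulate one 32-bit lane of AVX-512 VNNI's VPDPBUSD:
    four unsigned-8-bit x signed-8-bit products are summed and
    added to a 32-bit accumulator, all in a single instruction."""
    assert len(u8s) == len(s8s) == 4
    assert all(0 <= u <= 255 for u in u8s)       # unsigned int8 operands
    assert all(-128 <= s <= 127 for s in s8s)    # signed int8 operands
    return acc + sum(u * s for u, s in zip(u8s, s8s))

# 10 + (1*5 + 2*6 + 3*7 + 4*8) = 10 + 70
print(vpdpbusd_lane(10, [1, 2, 3, 4], [5, 6, 7, 8]))  # -> 80
```

On pre-VNNI AVX-512 chips the same work takes a three-instruction sequence (VPMADDUBSW, VPMADDWD, VPADDD), which is where the claimed inference speedup for int8-quantized models comes from.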