Intel formally announced a new class of Xeon Scalable processors that in many ways leapfrogs the best AMD has to offer.

I do love seeing the chip market get competitive again. Intel has formally announced a new class of Xeon Scalable processors, code-named “Cascade Lake-AP,” or Cascade Lake Advanced Performance, that in many ways leapfrogs the best AMD has to offer.

The news comes ahead of the Supercomputing 18 show and was likely timed to avoid being drowned out by the coming flood of announcements. It also comes one day ahead of an AMD announcement, which should be hitting the wires as you read this. I don’t think that’s a coincidence.

The Cascade Lake-AP processors come with up to 48 cores and support for 12 channels of DDR4 memory, a big leap over both Intel’s old design and AMD’s Epyc server processors. Intel’s current top-of-the-line processor, the Xeon Platinum 8180, has only 28 cores and six memory channels, while AMD’s Epyc has 32 cores and eight memory channels.

To get to 48 cores, Intel had to do something it once derided. Cascade Lake-AP models use what is called a multi-chip package (MCP) design, where the CPU is actually multiple dies connected by a very high-speed interconnect inside a single package. Last year Intel famously ridiculed the Epyc design as “four glued-together desktop die.” That’s how Epyc achieved 32 cores: it is four eight-core dies connected by what AMD calls Infinity Fabric.

Well, now Intel is doing it. Its most recent chips are single packages with 28 cores, and obviously the laws of physics were getting in the way. It’s easier to build smaller dies and tie them together than to build one giant monolithic die. So, Cascade Lake-AP uses a pair of 24-core dies bound by high-speed interconnects. This is a better manufacturing technique than building one big 48-core chip, because a flaw in any one of the 48 cores makes the whole chip useless. With a 24-core die, there’s less chance for something to go wrong.
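The yield argument can be made concrete with a toy defect model. This is a rough sketch under an assumed Poisson defect rate; the 0.01-defects-per-core figure is invented for illustration and is not an Intel number:

```python
import math

def die_yield(cores, defects_per_core=0.01):
    """Probability a die with `cores` cores has zero fatal defects,
    under a simple Poisson defect model (illustrative numbers only)."""
    return math.exp(-defects_per_core * cores)

# One monolithic 48-core die: a single defect scraps all 48 cores.
monolithic = die_yield(48)

# Two 24-core dies packaged together: each die passes or fails on its
# own, and a bad die costs only 24 cores' worth of silicon.
per_die = die_yield(24)

print(f"monolithic 48-core die yield: {monolithic:.3f}")  # 0.619
print(f"per 24-core die yield:        {per_die:.3f}")     # 0.787
```

The smaller die passes testing noticeably more often, which is the economic case for multi-chip packaging that the paragraph above sketches.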
That should translate to a cheaper chip for the customer.

Intel did confirm that this chip has fixes for the Meltdown and Spectre vulnerabilities, making it Intel’s first chip with hardware mitigations for the issues. The one thing Intel has not said is whether the processors will feature hyper-threading, which would equate to 96 threads per chip, and it’s not a given. Intel would say only that the chip is based on the Skylake architecture, which supports hyper-threading, but would not say definitively that this chip has it.

Intel targets HPC and AI markets

Intel is targeting the HPC and artificial intelligence (AI) crowds and is making some bold performance claims:

- Linpack: up to 1.21X the Xeon Platinum 8180 and 3.4X the AMD Epyc 7601
- Stream Triad: up to 1.83X the Xeon Platinum 8180 and 1.3X the AMD Epyc 7601
- AI/deep learning inference: up to 17X the Xeon Platinum 8180

The processor uses the same 14nm design process as previous Cascade Lake chips, but Intel would not say whether it uses the same LGA3647 socket. If it does, Cascade Lake-AP would be hamstrung in some ways: the same pin count as older chips means it would have the same number of PCIe 3.0 lanes, the same memory speed, and the same Ultra Path Interconnect (UPI) speed as its predecessors.

The new Cascade Lake-AP Xeon Scalable processors are expected to be available in the first half of 2019.
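The open hyper-threading question comes down to simple arithmetic on CPU topology. A minimal sketch (the function name and the single-socket configuration are my own illustration, not Intel’s):

```python
def logical_cpus(sockets: int, cores_per_socket: int, threads_per_core: int) -> int:
    """Logical CPUs the OS would see: sockets x physical cores x SMT threads."""
    return sockets * cores_per_socket * threads_per_core

# A 48-core Cascade Lake-AP with hyper-threading (2 threads per core):
print(logical_cpus(1, 48, 2))  # 96
# The same part without hyper-threading:
print(logical_cpus(1, 48, 1))  # 48
```

That 96-thread figure is what’s at stake in Intel’s silence on the feature.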