Intel today launched the third generation of its Xeon Scalable server-processor line: more than three dozen new chips built on its long-overdue 10-nanometer manufacturing process and featuring a host of specialized security and AI features.

The new chips, developed under the codename Ice Lake, were a long time coming because of the delays Intel had getting its manufacturing process down to 10nm. AMD, through its manufacturing partner TSMC, is already at 7nm, and its Epyc processors are steadily taking market share from Intel.

Intel says the Ice Lake series delivers a 20% improvement in instructions per clock cycle over the prior generation, thanks to the smaller process node letting it pack more transistors into the package.

One measure: the new top-of-the-line Xeon Platinum 8380 has 40 cores and 80 threads at a base frequency of 2.3GHz; the prior flagship had 28 cores and 56 threads. Intel says customers should expect an average 46% performance improvement in "popular data-center workloads" compared with its previous generation of server CPUs. Compared with a five-year-old server, Ice Lake-based machines will perform 2.65 times faster, Intel says.

The platform supports up to 6TB of system memory, eight channels of DDR4-3200 memory, and 64 lanes of PCIe Gen4 per socket.

40 chips for three markets

All told, Intel is releasing 40 different chips aimed at three markets. For cloud providers, the new Xeons are engineered and optimized for the requirements of cloud workloads and support a wide range of service environments.

For networks, Intel's network-optimized N-SKUs are designed to support network environments and are optimized for multiple workloads and performance levels.
Intel claims this generation of Xeon Scalable processors delivers, on average, 62% more performance on a range of broadly deployed network and 5G workloads than the prior generation.

For the intelligent edge, the new processors deliver the performance, security, and operational controls required for AI, complex image and video analytics, and consolidated workloads. The platform delivers up to 1.56 times the AI inference performance of the prior generation for image classification.

Intel is also emphasizing security, an area where AMD's Epyc has had built-in features since its first generation. In an online press briefing, Navin Shenoy, executive vice president and general manager of Intel's Data Platforms Group, talked up the Xeon's new Software Guard Extensions (SGX), which let the CPU turn portions of a server's memory into secure "enclaves" for storing sensitive data such as encryption keys. Data in an enclave is inaccessible to other applications running on the same server, even those with full administrator-level access to the machine.

The new Xeon also has built-in crypto acceleration, fittingly called Intel Crypto Acceleration. It speeds up major cryptographic algorithms such as AES and SHA and adds new instructions such as GFNI (Galois Field New Instructions), allowing real-time encryption without a performance penalty.

Though Ice Lake officially launched today, Intel has already sold more than 200,000 units to early customers. OEMs including Cisco, Dell, Hewlett Packard Enterprise, and Lenovo were first out of the gate to announce support for the new Xeon Scalable, and many more will no doubt follow.