Intel has announced a shift in strategy that impacts its XPU and data-center product roadmap.

XPU is an effort by Intel to combine multiple pieces of silicon into one package. The plan was to combine CPU, GPU, networking, FPGA, and AI accelerator and use software to choose the best processor for the task at hand.

That’s an ambitious project, and it looks like Intel is admitting that it can’t do it, at least for now.

Jeff McVeigh, corporate vice president and general manager of the Super Compute Group at Intel, provided an update to the data-center processor roadmap that involves taking a few steps back. Its proposed combination CPU and GPU, code-named Falcon Shores, will now be a GPU-only chip.

“A lot has changed in the past 12 months. Generative AI is transforming everything. And from our standpoint, from Intel’s standpoint, we feel it is premature to be integrating the CPU and GPU for the next-generation product,” McVeigh said during a press briefing at the ISC High Performance Conference in Hamburg, Germany.

The former plan called for the CPU and GPU to be on the same development cycle, but the GPU could take longer to develop than the CPU, which would have left the CPU technology sitting idle while the GPU was being finished. Intel decided that the dynamic nature of today’s market dictates a need for discrete solutions.

“I’ll admit it, I was wrong. We were moving too fast down the XPU path. We feel that this dynamic nature will be better served by having that flexibility at the platform level.
And then we’ll integrate when the time is right,” McVeigh said.

The result is a significant change in Intel’s roadmap.

Intel in March scrapped a supercomputer GPU code-named Rialto Bridge, which was to be the successor to the Max Series GPU, code-named Ponte Vecchio, which is already on the market.

The new Falcon Shores chip, the successor to Ponte Vecchio, will now be a next-generation discrete GPU targeted at both high-performance computing and AI. It includes AI processors, standard Ethernet switching, HBM3 memory, and I/O at scale, and it is now due in 2025.

McVeigh said that Intel hasn’t ruled out combining a CPU and GPU, but it’s not the priority right now. “We will at the right time … when the window of weather is right, we’ll do that. We just don’t feel like it’s right in this next generation.”

Other Intel news

McVeigh also talked up improvements to Intel’s oneAPI toolkit, a family of compilers, libraries, and programming tools that can execute code on Xeon CPUs, the Falcon Shores GPU, and the Gaudi AI processor. Write the code once, and oneAPI can pick the best chip on which to execute it. The latest update delivers speed gains for HPC applications with OpenMP GPU offload, extended support for OpenMP and Fortran, and accelerated AI and deep learning.

On the supercomputer front, Intel has delivered more than 10,624 compute nodes of Xeon Max Series chips with HBM for the Aurora supercomputer, which includes 21,248 CPUs, 63,744 GPUs, 10.9PB of DDR memory, and 230PB of storage. Aurora is being built at Argonne National Laboratory and will exceed 2 exaflops of performance when complete. When operational, it’s expected to dethrone Frontier as the fastest supercomputer in the world.

Intel also discussed servers from Supermicro that seem to be aimed at taking on Nvidia’s DGX AI systems.
They feature eight Ponte Vecchio Max Series GPUs, each with 128 GB of HBM, for a total of more than 1 TB of HBM per system. Not surprisingly, the servers are targeted at AI deployments. The product is expected to be broadly available in Q3.