Nvidia kicked off its GPU Technology Conference (GTC) 2021 with a bang: a new CPU for high-performance computing (HPC) clients, its first-ever data-center CPU, called Grace.

Based on the Arm Neoverse architecture, Grace will, Nvidia claims, deliver up to 10 times the performance of today's fastest servers on complex artificial-intelligence and HPC workloads.

But that's comparing then and now. Grace won't ship until 2023, and in those two years competitors will undoubtedly up their game, too. Then again, no one has ever accused CEO Jen-Hsun Huang of being subdued.

Nvidia made a point that Grace is not intended to compete head-to-head against Intel's Xeon and AMD's Epyc processors. Instead, Grace is a niche product, designed specifically to be tightly coupled with Nvidia's GPUs to remove bottlenecks in complex AI and HPC applications.

Nvidia is in the process of acquiring Arm Holdings, a deal that should close later this year if all objections are overcome.

"Leading-edge AI and data science are pushing today's computer architecture beyond its limits—processing unthinkable amounts of data," said Huang. "Using licensed Arm IP, Nvidia has designed Grace as a CPU specifically for giant-scale AI and HPC. Coupled with the GPU and DPU, Grace gives us the third foundational technology for computing, and the ability to re-architect the data center to advance AI. Nvidia is now a three-chip company."

Nvidia does have server offerings, the DGX series, which use AMD Epyc CPUs (you didn't think they were going to use Intel, did you?) to boot the system and coordinate the Ampere GPUs.
Epyc is great for running databases, but it's a general-compute processor, lacking the kind of high-speed I/O and deep-learning optimizations Nvidia needs.

Nvidia didn't give a lot of detail, except to say Grace will be built on a future version of the Arm Neoverse core using a 5-nanometer manufacturing process, which means it will be fabricated by TSMC. Grace will also use Nvidia's homegrown NVLink high-speed interconnect between the CPU and GPU. A new version planned for 2023 will offer over 900 GB/s of bandwidth between the CPU and GPU, much faster than the PCI Express links AMD uses for CPU-GPU communication.

Two supercomputing customers

Even though Grace isn't shipping until 2023, Nvidia already has two supercomputer customers for the processor. The Swiss National Supercomputing Centre (CSCS) and Los Alamos National Laboratory announced today that they'll be ordering supercomputers based on Grace. Both systems will be built by HPE's Cray subsidiary (who else?) and are set to come online in 2023.

CSCS's system, called Alps, will replace its current Piz Daint system, a Xeon and Nvidia P100 cluster. CSCS claims Alps will offer 20 exaflops of AI performance, which would be remarkable if delivered; the fastest system today, Japan's Fugaku, manages about one exaflop.

Arm's stumbles in the data center

Overall, this is a smart move on Nvidia's part, because general-purpose Arm server processors have not done well. Nvidia has its own failure in the data-center CPU market: a decade ago it launched Project Denver, which never got out of the labs. Denver was a general-purpose CPU, whereas Grace is highly vertical and specialized.
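To put the interconnect numbers in perspective, here is a rough back-of-the-envelope sketch. The 900 GB/s figure is Nvidia's stated target for Grace-era NVLink; the ~32 GB/s figure for a PCIe 4.0 x16 link is an approximate peak used here as an assumption, and the 1 TB workload size is purely illustrative:

```python
# Back-of-the-envelope comparison of CPU-GPU transfer times.
# Assumptions: 900 GB/s is Nvidia's stated NVLink target for 2023;
# ~32 GB/s approximates a PCIe 4.0 x16 link; 1 TB is a made-up workload.
NVLINK_GBPS = 900.0      # Nvidia's stated Grace-era NVLink bandwidth
PCIE4_X16_GBPS = 32.0    # approximate PCIe 4.0 x16 peak (assumption)

def transfer_seconds(gigabytes: float, bandwidth_gbps: float) -> float:
    """Time in seconds to move `gigabytes` of data at `bandwidth_gbps` GB/s."""
    return gigabytes / bandwidth_gbps

workload_gb = 1000.0  # hypothetical 1 TB of model weights or training data
print(f"NVLink:       {transfer_seconds(workload_gb, NVLINK_GBPS):6.1f} s")
print(f"PCIe 4.0 x16: {transfer_seconds(workload_gb, PCIE4_X16_GBPS):6.1f} s")
```

Under these assumptions NVLink moves the same terabyte roughly 28 times faster, which is the kind of gap that matters when a GPU would otherwise sit idle waiting on data.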