Intel, Microsoft, Google, Facebook, HPE, Cisco, Dell-EMC, Huawei and Alibaba join forces to create Compute Express Link, a high-speed interconnect for chip-to-chip communication.

A group of big names in the data center space has linked arms to develop yet another high-speed interconnect, this one designed to connect processor chips. It's called Compute Express Link, or CXL, and it's aimed at plugging data-center CPUs into accelerator chips. Members of the alliance that developed the spec are Intel, Microsoft, Google, Facebook, HPE, Cisco, and Dell-EMC, plus Huawei and Alibaba.

Where are IBM, AMD, Nvidia, Xilinx, or any of the Arm server vendors such as Marvell/Cavium? They have their own PCIe-based spec, called CCIX; that group consists of AMD, Arm, Mellanox, Qualcomm, Xilinx, and Huawei. There's also the OpenCAPI effort, led by IBM; the OpenCAPI Consortium, founded in October 2016, includes AMD, Google, IBM, Mellanox, Micron, Nvidia, HPE, Dell EMC, and Xilinx. So several of them are double-dipping, while everyone else seems to have chosen sides. Don't you just love unity in technology?

What is Compute Express Link (CXL)?

The consortium describes the CXL technology as "maintaining memory coherency between the CPU memory space and memory on attached devices, which allows resource sharing for higher performance, reduced software stack complexity, and lower overall system cost. This permits users to simply focus on target workloads as opposed to the redundant memory management hardware in their accelerators," Intel said in a statement.

CXL is built on fifth-generation PCI Express physical and electrical protocols, giving CXL up to 128GB/s of transfer speed using x16 lanes. It has three interface protocols: an I/O protocol for sending commands and receiving status updates, a memory protocol that allows a host processor to efficiently share physical RAM with an accelerator, and a data coherency interface for resource sharing.

What CXL basically does is allow CPUs, SoCs, GPUs, and FPGAs to talk directly and share memory. The way things work now, if a CPU wants to send data to an FPGA, it has to go out through the Ethernet port, which is much slower, and pass through about a half dozen interfaces before the receiving chip gets it. CXL will allow for direct, fast chip-to-chip communication, which will be helpful as data centers get larger and larger.

CXL has one advantage over CCIX and OpenCAPI. OpenCAPI and CCIX are balanced, meaning the transmitter and receiver have equal levels of complexity, and as transmission scales up, both sender and receiver grow in complexity. CXL operates asymmetrically, like USB, so all the heavy lifting is done on the processor side, where it should be. That gives CXL the potential to be much more scalable.

Founding member Intel noted that the rise of specialized workloads like compression, encryption, and artificial intelligence (AI) has brought about increased use of heterogeneous computing, where purpose-built accelerators more often work side by side with general-purpose CPUs.

"CXL creates a high-speed, low-latency interconnect between the CPU and workload accelerators, such as GPUs, FPGAs and networking. CXL maintains memory coherency between the devices, allowing resource sharing for higher performance, reduced software stack complexity and lower overall system cost," said Navin Shenoy, executive vice president and general manager of the Data Center Group at Intel, in a statement.
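As a quick sanity check on that 128GB/s figure, here is a minimal back-of-the-envelope calculation in Python. It assumes the quoted number counts both directions of a PCIe 5.0 x16 link and applies the standard 128b/130b line encoding; the consortium's statement doesn't spell out how the figure is derived, so treat this as an illustration rather than an official breakdown.

    # Rough check of the quoted 128GB/s: PCIe 5.0 runs at 32 GT/s per lane
    # with 128b/130b encoding, and the figure assumes a full x16 link.
    raw_gt_per_lane = 32            # PCIe 5.0 transfer rate, GT/s per lane
    encoding = 128 / 130            # usable fraction after 128b/130b encoding
    lanes = 16

    # Bandwidth in one direction, in GB/s (8 bits per byte)
    one_way_gbps = raw_gt_per_lane * encoding * lanes / 8
    print(f"one direction:   ~{one_way_gbps:.0f} GB/s")        # ~63 GB/s
    print(f"both directions: ~{2 * one_way_gbps:.0f} GB/s")    # ~126 GB/s

Without the encoding overhead, the raw rate works out to exactly 64GB/s per direction, which is presumably where the round 128GB/s number comes from.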
Version 1.0 of the spec is due to be published on computeexpresslink.org, so it is not launching just yet. The member companies said they will launch products starting in 2021.