A group of big names in the data center space have linked arms to develop yet another high-speed interconnect, this one designed to connect processor chips. It's called Compute Express Link, or CXL, and it's aimed at plugging data-center CPUs into accelerator chips. Members of the alliance that developed the spec are Intel, Microsoft, Google, Facebook, HPE, Cisco, and Dell EMC, plus Huawei and Alibaba.

Where are IBM, AMD, Nvidia, Xilinx, or any of the Arm server vendors such as Marvell/Cavium? They have their own PCIe-based spec, called CCIX; that group consists of AMD, Arm, Mellanox, Qualcomm, Xilinx, and Huawei. There's also the OpenCAPI effort, led by IBM through the OpenCAPI Consortium, founded in October 2016 by AMD, Google, IBM, Mellanox, Micron, Nvidia, HPE, Dell EMC, and Xilinx. So several companies are double-dipping, while everyone else seems to have chosen sides. Don't you just love unity in technology?

What is Compute Express Link (CXL)?

The consortium describes the CXL technology as "maintaining memory coherency between the CPU memory space and memory on attached devices, which allows resource sharing for higher performance, reduced software stack complexity, and lower overall system cost. This permits users to simply focus on target workloads as opposed to the redundant memory management hardware in their accelerators," Intel said in a statement.

CXL is built on fifth-generation PCI Express physical and electrical protocols, giving it up to 128GB/s of transfer speed over x16 lanes. It has three interface protocols: an I/O protocol for sending commands and receiving status updates, a memory protocol that allows a host processor to efficiently share physical RAM with an accelerator, and a data coherency interface for resource sharing.

What CXL basically does is allow CPUs, SoCs, GPUs, and FPGAs to talk directly and share memory.
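The 128GB/s figure can be sanity-checked with back-of-envelope arithmetic. A minimal sketch, assuming PCIe 5.0's 32 GT/s per-lane signaling rate and 128b/130b line encoding, and counting both directions of the full-duplex link:

```python
# Rough check of the 128 GB/s headline figure for a PCIe 5.0 x16 link.
# Assumptions (not from the article): 32 GT/s per lane, 128b/130b encoding,
# and that "128 GB/s" counts both directions of the full-duplex link.

RAW_RATE = 32e9          # transfers/s per lane (PCIe 5.0)
ENCODING = 128 / 130     # 128b/130b encoding efficiency
LANES = 16

per_lane_bytes = RAW_RATE * ENCODING / 8          # usable bytes/s, one lane, one direction
one_direction = per_lane_bytes * LANES / 1e9      # GB/s, x16, one direction
both_directions = 2 * one_direction               # GB/s, both directions combined

print(f"{one_direction:.1f} GB/s per direction")      # ≈ 63.0 GB/s
print(f"{both_directions:.1f} GB/s bidirectional")    # ≈ 126.0 GB/s
```

Under those assumptions the link delivers about 63GB/s in each direction, or roughly 126GB/s combined, which is where the rounded "128GB/s" marketing number comes from.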
The way things work now, if a CPU wants to send data to an FPGA, the data has to go out through the Ethernet port, which is much slower, and pass through about a half-dozen interfaces before the receiving chip gets it. CXL will allow for direct, fast chip-to-chip communication, which will be helpful as data centers get larger and larger.

CXL has one advantage over CCIX and OpenCAPI. Those two are symmetric, meaning the transmitter and receiver have equal levels of complexity, and as transmission scales up, both sender and receiver grow more complex. CXL operates asymmetrically, like USB, so all the heavy lifting is done on the processor side, where it belongs. That gives CXL the potential to be much more scalable.

Founding member Intel noted that the rise of specialized workloads like compression, encryption, and artificial intelligence (AI) has brought about increased use of heterogeneous computing, where purpose-built accelerators more often work side by side with general-purpose CPUs.

"CXL creates a high-speed, low-latency interconnect between the CPU and workload accelerators, such as GPUs, FPGAs and networking. CXL maintains memory coherency between the devices, allowing resource sharing for higher performance, reduced software stack complexity and lower overall system cost," said Navin Shenoy, executive vice president and general manager of the Data Center Group at Intel, in a statement.

Version 1.0 of the spec is due to be published on computeexpresslink.org, so it is not launching just yet. The member companies said they will launch products starting in 2021.