Data center giants announce new high-speed interconnect

News Analysis
Mar 12, 2019 | 3 mins
Data Center, Networking

Intel, Microsoft, Google, Facebook, HPE, Cisco, Dell-EMC, Huawei and Alibaba join forces to create Compute Express Link, a high-speed interconnect for chip-to-chip communication.


A group of big names in the data center space has linked arms to develop yet another high-speed interconnect, this one designed to connect processor chips.

Called Compute Express Link, or CXL, the technology is aimed at plugging data-center CPUs into accelerator chips. Members of the alliance that developed the spec are Intel, Microsoft, Google, Facebook, HPE, Cisco, and Dell-EMC, plus Huawei and Alibaba.

Where are IBM, AMD, Nvidia, Xilinx, and the Arm server vendors such as Marvell/Cavium? They have their own PCIe-based spec, called CCIX, backed by a group consisting of AMD, Arm, Mellanox, Qualcomm, Xilinx, and Huawei.

There’s also the OpenCAPI effort, led by the OpenCAPI Consortium, which IBM founded in October 2016 along with AMD, Google, Mellanox, Micron, Nvidia, HPE, Dell EMC, and Xilinx. So several of these companies are double-dipping, while everyone else seems to have chosen sides. Don’t you just love unity in technology?

In a statement, the consortium described the CXL technology as “maintaining memory coherency between the CPU memory space and memory on attached devices, which allows resource sharing for higher performance, reduced software stack complexity, and lower overall system cost. This permits users to simply focus on target workloads as opposed to the redundant memory management hardware in their accelerators.”

CXL is built on fifth-generation PCI Express (PCIe 5.0) physical and electrical interfaces, giving it up to 128GB/s of transfer speed across x16 lanes. It defines three interface protocols: an I/O protocol for sending commands and receiving status updates, a memory protocol that lets the host processor efficiently share physical RAM with an accelerator, and a data coherency interface for resource sharing.
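To make the three-protocol split concrete, here is a minimal C sketch. The enum values loosely mirror the spec’s CXL.io, CXL.cache, and CXL.mem protocol classes, but every identifier and field below is invented for illustration; this is not a real CXL API.

```c
#include <stdint.h>
#include <stdio.h>

/* The three protocol classes the article describes, modeled as tags
 * on a hypothetical transaction record. All names are invented. */
enum cxl_protocol {
    CXL_IO,     /* commands and status updates */
    CXL_MEM,    /* host shares physical RAM with an accelerator */
    CXL_CACHE   /* data coherency interface for resource sharing */
};

struct cxl_txn {
    enum cxl_protocol proto;  /* which protocol class carries this txn */
    uint64_t addr;            /* target address in the shared space */
    uint32_t len;             /* payload length in bytes */
};

int main(void) {
    struct cxl_txn t = { CXL_MEM, 0x100000000ULL, 64 };
    printf("proto=%d addr=0x%llx len=%u\n",
           t.proto, (unsigned long long)t.addr, t.len);
    return 0;
}
```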

What CXL is basically doing is allowing CPUs, SoCs, GPUs, and FPGAs to talk to each other directly and share memory. As things work now, if a CPU wants to send data to an FPGA, the traffic goes out through the Ethernet port, which is much slower, and passes through roughly a half dozen interfaces before the receiving chip gets it. CXL allows for direct, fast chip-to-chip communication, which will only grow more valuable as data centers get larger and larger.
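The contrast can be sketched in a few lines of C. Everything here is a stand-in: a plain buffer plays the role of the coherent CXL-attached region, and the “Ethernet” path is a stub, but it shows the difference between marshaling data through a network stack and simply storing it where the accelerator can already see it.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Stand-in for a coherent, CXL-attached region the accelerator also
 * maps; in this sketch it is just an ordinary buffer. */
static uint8_t shared_region[4096];

/* Old path: data leaves via the NIC and crosses several interfaces. */
static void send_over_ethernet(const void *buf, size_t len) {
    (void)buf;
    printf("old path: %zu bytes through NIC + ~6 interface hops\n", len);
}

/* CXL-style path: one store into memory the accelerator already sees. */
static void store_into_shared(const void *buf, size_t len) {
    memcpy(shared_region, buf, len);
    printf("cxl path: %zu bytes visible to the accelerator directly\n", len);
}

int main(void) {
    uint8_t payload[64] = {0};
    send_over_ethernet(payload, sizeof payload);
    store_into_shared(payload, sizeof payload);
    return 0;
}
```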

CXL has one notable advantage over CCIX and OpenCAPI: both of those are balanced protocols, meaning the transmitter and receiver have equal levels of complexity, and as transmission scales up, both ends grow more complex. CXL operates asymmetrically, like USB, so the heavy lifting is done on the processor side, where it should be. That gives CXL the potential to be much more scalable.
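A toy model of that asymmetry, with invented names and a drastically simplified coherence state machine: the device side issues a bare read, while all the bookkeeping (tracking whether the line is dirty, downgrading it) lives in host-side logic.

```c
#include <stdint.h>
#include <stdio.h>

enum line_state { INVALID, SHARED, MODIFIED };  /* simplified states */

/* Host-side coherence tracker; all complexity lives here. */
struct home_agent {
    enum line_state state;
    uint64_t data;
};

/* Device side: nothing but a request -- no snoop filters, no
 * directories, no state machine of its own. */
static uint64_t device_read(struct home_agent *host, int *was_dirty) {
    *was_dirty = (host->state == MODIFIED);
    if (host->state == MODIFIED)  /* host resolves coherence itself */
        host->state = SHARED;     /* e.g. write back and downgrade */
    return host->data;
}

int main(void) {
    struct home_agent host = { MODIFIED, 42 };
    int dirty;
    uint64_t v = device_read(&host, &dirty);
    printf("device got %llu (host resolved dirty=%d)\n",
           (unsigned long long)v, dirty);
    return 0;
}
```

In a balanced design, the device side would need its own copy of that coherence machinery; here it stays trivially simple, which is the scalability argument in a nutshell.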

Founding member Intel noted that the rise of specialized workloads like compression, encryption, and artificial intelligence (AI) has brought about increased use of heterogeneous computing, where purpose-built accelerators more often work side by side with general-purpose CPUs.

“CXL creates a high-speed, low-latency interconnect between the CPU and workload accelerators, such as GPUs, FPGAs and networking. CXL maintains memory coherency between the devices, allowing resource sharing for higher performance, reduced software stack complexity and lower overall system cost,” said Navin Shenoy, executive vice president and general manager of the Data Center Group at Intel, in a statement.

Version 1.0 of the spec is due to be published on computeexpresslink.org, so the technology is not launching just yet. The member companies said they will launch products starting in 2021.

Andy Patrizio is a freelance journalist based in Southern California who has covered the computer industry for 20 years and has built every x86 PC he’s ever owned, laptops not included.

The opinions expressed in this blog are those of the author and do not necessarily represent those of ITworld, Network World, its parent, subsidiary or affiliated companies.
