PCI Express-based fabrics: A low-cost alternative to InfiniBand

By Larry Chisvin and Krishna Mallampati, PLX Technology, special to Network World
June 11, 2013 09:56 AM ET

Network World - This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter's approach.

By building on the natural strengths of PCI Express (PCIe) -- it's everywhere, it's fast, it's low power, it's affordable -- and by adding some straightforward, standards-compliant extensions that address multi-host communication and I/O sharing capabilities, a universal interconnect now exists that substantially improves on the status quo in high-performance cloud and data center installations.

One application for these installations now receiving considerable attention is the replacement of small InfiniBand clusters with a PCIe-based alternative. This implementation approach for high-speed data center applications was addressed at the Supercomputing 2012 conference (SC12) in Salt Lake City, where the high-performance computing (HPC) community began to really sit up and take notice.

The belief is that in cloud and data center environments, PCIe-based fabrics can replace small InfiniBand clusters, offering Quad Data Rate (QDR)-like performance for CPU-to-CPU communication, enabling straightforward sharing of I/O devices, and doing so at much lower cost and power. InfiniBand doesn't do this anywhere near as easily or cost-effectively. Figure 1 illustrates the simplicity of a PCIe-based fabric compared to InfiniBand.

Figure 1: PCIe vs. InfiniBand. PCIe-based fabrics enable clustering and I/O sharing with fewer parts.

InfiniBand predated PCIe and was originally envisioned as a unified fabric to replace most other data center interconnects. In the end, however, it did not achieve that goal, but did develop a niche as a high-speed clustering interconnect that replaced some proprietary solutions.

InfiniBand, like PCIe, has evolved considerably since its introduction. The initial speed was Single Data Rate (SDR) at 2Gbps per lane, the same effective data rate as PCIe Gen 1; indeed, the original PCIe specification borrowed heavily from InfiniBand at the signaling level. InfiniBand has since been enhanced through Double Data Rate (DDR) at 4Gbps and QDR at 8Gbps, and is now shipping at Fourteen Data Rate (FDR) at 13.64Gbps, with higher speeds envisioned moving forward.
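
As a quick illustration of where these per-lane figures come from, the sketch below multiplies each generation's publicly documented signaling rate by its line-encoding efficiency (8b/10b for the early generations, 64b/66b for FDR, 128b/130b for PCIe Gen 3). It is a back-of-the-envelope calculation, not vendor data or a measurement.

```python
# Effective per-lane data rate = signaling rate (Gbaud / GT/s) x encoding efficiency.
links = {
    "InfiniBand SDR": (2.5,     8 / 10),     # 8b/10b encoding
    "InfiniBand DDR": (5.0,     8 / 10),
    "InfiniBand QDR": (10.0,    8 / 10),
    "InfiniBand FDR": (14.0625, 64 / 66),    # 64b/66b encoding
    "PCIe Gen 1":     (2.5,     8 / 10),
    "PCIe Gen 2":     (5.0,     8 / 10),
    "PCIe Gen 3":     (8.0,     128 / 130),  # 128b/130b encoding
}

for name, (signaling, efficiency) in links.items():
    print(f"{name:>14}: {signaling * efficiency:5.2f} Gbps per lane")
```

Running it reproduces the 2, 4, 8 and 13.64Gbps figures above, and shows PCIe Gen 3 landing at roughly 7.88Gbps per lane.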

The QDR data rate is the closest match to PCIe Gen 3. Because bandwidth and latency are similar, a PCIe-based fabric should deliver performance comparable to an InfiniBand solution at that data rate. This is especially true if the PCIe fabric adds Remote DMA (RDMA) to the basic PCIe capability, since RDMA offers very low-latency host-to-host transfers by copying data directly between the host applications' memories.
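
To put "similar bandwidth" in concrete terms, the short comparison below assumes a 4-lane link on both sides (a common InfiniBand port width) and counts only raw link bandwidth, ignoring protocol overhead and latency.

```python
LANES = 4                              # 4-lane link on both sides

qdr_per_lane  = 10.0 * (8 / 10)        # QDR: 10Gbaud, 8b/10b -> 8.0 Gbps
gen3_per_lane = 8.0 * (128 / 130)      # PCIe Gen 3: 8GT/s, 128b/130b -> ~7.88 Gbps

for name, per_lane in [("InfiniBand QDR 4X", qdr_per_lane),
                       ("PCIe Gen 3 x4", gen3_per_lane)]:
    total = per_lane * LANES
    print(f"{name:>17}: {total:.1f} Gbps (~{total / 8:.1f} GB/s)")
```

Raw bandwidth differs by only a few percent, which is why the practical comparison comes down to latency and the efficiency of the host-to-host path -- exactly what RDMA-style direct copies address.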

PCIe also allows sharing of I/O devices using standard multifunction networking and telecommunications hardware and software, something InfiniBand cannot do easily.

Native sharing of I/O devices and high-speed communication between the CPUs in a system are not part of the current PCIe specification. That specification does, however, provide a mechanism for vendors to add their own extensions while remaining compatible with it. Using these vendor-defined extensions allows an enhanced implementation to work with existing test and analysis equipment while offering a more robust feature set.
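
One standard construct such extensions can use is the Vendor-Specific Extended Capability (VSEC) in PCIe configuration space, which any compliant tool can at least enumerate. The sketch below, which assumes a Linux host and a hypothetical device address, walks a device's extended capability list looking for VSEC entries; it illustrates the general mechanism, not any particular vendor's fabric extensions.

```python
import struct

VSEC_CAP_ID = 0x000B      # PCIe Vendor-Specific Extended Capability ID
EXT_CAP_START = 0x100     # extended capabilities begin at config offset 0x100

def find_vsec(config_path):
    """Walk a device's PCIe extended capability list and return the
    (offset, version) of each Vendor-Specific Extended Capability found."""
    with open(config_path, "rb") as f:
        cfg = f.read()
    if len(cfg) <= EXT_CAP_START:
        return []          # conventional PCI device: no extended config space
    found, offset = [], EXT_CAP_START
    while offset:
        header, = struct.unpack_from("<I", cfg, offset)
        cap_id = header & 0xFFFF
        version = (header >> 16) & 0xF
        nxt = (header >> 20) & 0xFFC       # next-capability pointer, DWORD aligned
        if cap_id == VSEC_CAP_ID:
            found.append((offset, version))
        if header == 0 or nxt <= offset:   # end of list, or a chain that doesn't advance
            break
        offset = nxt
    return found

# Hypothetical device address; reading beyond 256 bytes of config space
# through sysfs generally requires root privileges.
for off, ver in find_vsec("/sys/bus/pci/devices/0000:01:00.0/config"):
    print(f"VSEC at config offset 0x{off:03x} (capability version {ver})")
```

In short, the vendor-extension mechanism lets a fabric vendor layer clustering and I/O-sharing features on top of PCIe while standard discovery, test and analysis tools continue to see an ordinary, specification-compliant device.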
