Expectations for InfiniBand

* InfiniBand making a comeback

InfiniBand, once almost given up for dead by some unbelievers, has been quietly making all sorts of progress in vendor test labs, in high-performance computing environments and, increasingly, on the floors of commercial IT rooms. If you haven't been following the technology for a while, it has likely changed quite a bit since you last looked.

The expectation for InfiniBand is that it will provide a high-performance, moderate-cost, high-bandwidth, low-latency environment for data transport. Since the last time we took up this subject, InfiniBand has made progress in a number of areas.

HP has announced the world's fastest 20Gbps InfiniBand blade server. Sun has launched a hot-pluggable blade server. Just as importantly, the OpenFabrics Alliance (a group helping drive an open-source, Linux-based InfiniBand software stack) has announced support for an open-source 10Gbps Ethernet Remote Direct Memory Access (RDMA) software stack to complement the native InfiniBand protocol.
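For readers who haven't touched the OpenFabrics stack, the sketch below shows the kind of verbs API (libibverbs) programming it exposes: a minimal program that enumerates the RDMA-capable adapters on a host. This is an illustrative example, not code from any of the products mentioned here, and the output naturally depends on what hardware is installed.

```c
/* Minimal sketch: list RDMA-capable devices via the OpenFabrics verbs API.
 * Build (on a system with libibverbs installed): gcc list_devs.c -libverbs */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devs = ibv_get_device_list(&num_devices);
    if (!devs) {
        perror("ibv_get_device_list");
        return 1;
    }

    printf("Found %d RDMA device(s)\n", num_devices);
    for (int i = 0; i < num_devices; ++i)
        printf("  %s (node GUID 0x%016llx)\n",
               ibv_get_device_name(devs[i]),
               (unsigned long long)ibv_get_device_guid(devs[i]));

    ibv_free_device_list(devs);
    return 0;
}
```

The same verbs calls work whether the underlying fabric is native InfiniBand or an Ethernet RDMA transport, which is exactly the portability the alliance is after.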

RDMA moves data directly from the memory of one system into the memory of another without burning CPU cycles on either end; that work is offloaded to the network adapters, freeing the CPU for the computational tasks the applications actually need. The Ethernet support for RDMA should provide a low-cost stepping stone for the industry that will perhaps make a final move to InfiniBand easier. At least it offers the prospect of eliminating forklift upgrades when the time is right.
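To make the "no CPU cycles" point concrete, here is a hedged sketch of a one-sided RDMA write using the verbs API. It assumes a queue pair that is already connected and a peer whose buffer address and rkey were exchanged out of band; the function name rdma_write and its parameters are illustrative, not part of any vendor's product. The CPU does nothing but post the work request; the adapter performs the transfer.

```c
/* Sketch of a one-sided RDMA write. QP creation, the modify-QP state
 * transitions and the out-of-band exchange of addresses/keys are omitted. */
#include <string.h>
#include <stdint.h>
#include <infiniband/verbs.h>

/* Push 'len' bytes from a locally registered buffer straight into the
 * remote host's memory, with no involvement from the remote CPU. */
int rdma_write(struct ibv_qp *qp, struct ibv_mr *local_mr,
               void *local_buf, size_t len,
               uint64_t remote_addr, uint32_t remote_rkey)
{
    struct ibv_sge sge = {
        .addr   = (uintptr_t)local_buf,
        .length = (uint32_t)len,
        .lkey   = local_mr->lkey,
    };

    struct ibv_send_wr wr;
    memset(&wr, 0, sizeof(wr));
    wr.wr_id               = 1;
    wr.opcode              = IBV_WR_RDMA_WRITE;   /* one-sided operation */
    wr.send_flags          = IBV_SEND_SIGNALED;   /* request a completion */
    wr.sg_list             = &sge;
    wr.num_sge             = 1;
    wr.wr.rdma.remote_addr = remote_addr;         /* peer's virtual address */
    wr.wr.rdma.rkey        = remote_rkey;         /* peer's memory-region key */

    struct ibv_send_wr *bad_wr = NULL;
    return ibv_post_send(qp, &wr, &bad_wr);       /* 0 on success */
}
```

Once the work request is posted, the host CPU is free to go back to application work while the adapter completes the transfer and reports it on the completion queue.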

InfiniBand has typically been thought of as a clustering protocol best applied within the walls of the IT center. That is changing, however, as InfiniBand can now reach outside the data center. Dan, a correspondent from Mellanox (a major player in the InfiniBand space), pointed me toward Obsidian Research, a switch builder whose device lets data centers merge remote InfiniBand fabrics into a unified network. Its switch connects InfiniBand clusters to one another over a choice of WAN links or 10G Ethernet running on dedicated optical networks.

Mellanox and the other InfiniBand companies keep pushing the performance envelope, and engineering and scientific high-performance computing (HPC) centers have been glad to take advantage of what this has to offer. Increasingly, so has Wall Street. Why? Because at least one financial house estimates that it will make $100 million per year for each millisecond by which its trading applications beat the market. The stakes are high.

Storage vendors currently shipping InfiniBand products include HP, IBM, Isilon, LSI and Network Appliance. Expect the industry to bump InfiniBand speeds up to 40Gbps in the near future.
