The InfiniBand primer continues

Opinion
Feb 16, 2006 | 3 mins
Data Center

The InfiniBand fabric

InfiniBand’s promise lies not merely in its ability to deliver high performance, but in its ability to do so using low-cost servers. Right now InfiniBand supports x86, x64 (AMD64 and Intel EM64T) and Itanium, running under Linux and Windows. It gets its high-performance characteristics from its ability to tie those inexpensive machines together in clusters and grids through a connective fabric that looks somewhat like a storage-area network fabric.

The InfiniBand fabric supports both fiber-optic and copper-based connections, and is likely to be a mix of host channel adapters for the CPUs and target channel adapters for peripheral devices, all of which connect through a switch and operate at 10Gbps.

Host channel adapters act like a combination of a network interface card (NIC) and a host bus adapter operating as an initiator (the concept of a channel will sound very familiar to those of you who come from the mainframe world; it has the same meaning here as it does on the mainframe). Host channel adapters sit on servers, functioning something like a TCP offload engine (TOE). The InfiniBand “channel” is the link between the host and target adapters.
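
To make the host-channel-adapter idea a bit more concrete, here is a minimal sketch written against the Linux verbs library (libibverbs) from the OpenIB stack. It simply lists the HCAs a server exposes and reports the state of each one's first port. The environment (a Linux host with libibverbs installed) is my assumption, not something the primer prescribes, so treat it as illustrative rather than definitive.

```c
/* hca_list.c: enumerate InfiniBand host channel adapters with libibverbs.
 * Build (assuming the OpenIB verbs library is installed):
 *   gcc hca_list.c -o hca_list -libverbs
 */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devices = ibv_get_device_list(&num_devices);
    if (!devices || num_devices == 0) {
        fprintf(stderr, "No InfiniBand HCAs found\n");
        return 1;
    }

    for (int i = 0; i < num_devices; i++) {
        struct ibv_context *ctx = ibv_open_device(devices[i]);
        if (!ctx)
            continue;

        struct ibv_port_attr port;
        /* HCA port numbering starts at 1; query the first port only. */
        if (ibv_query_port(ctx, 1, &port) == 0) {
            printf("%s: port 1 is %s, LID 0x%04x\n",
                   ibv_get_device_name(devices[i]),
                   ibv_port_state_str(port.state),
                   port.lid);
        }
        ibv_close_device(ctx);
    }

    ibv_free_device_list(devices);
    return 0;
}
```

Run it on a host that sits on the fabric; an active port will report the local identifier (LID) assigned to it by the subnet manager.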

A few points about transfer rates. First, be aware that even the copper connections support the full 10Gbps transfer rate. Second, when I say 10Gbps I err on the side of conservatism (a rare thing for someone from Massachusetts): node-to-node communication can travel at twice that rate, and data traveling between switches can move as fast as 60Gbps.
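
For the curious, the arithmetic behind those figures is straightforward lane math from the InfiniBand spec; the column above doesn't spell it out, so take this little sketch as my own gloss. A basic lane signals at 2.5Gbps, links aggregate 4 or 12 lanes, and double data rate (DDR) doubles the per-lane figure.

```c
#include <stdio.h>

/* Back-of-the-envelope InfiniBand link rates: per-lane signaling rate
 * multiplied by the number of lanes in the link. Illustrative only. */
int main(void)
{
    const double sdr_lane_gbps = 2.5;  /* single data rate (SDR) lane */
    const double ddr_lane_gbps = 5.0;  /* double data rate (DDR) lane */

    printf("4X SDR (typical host link):  %.0f Gbps\n", 4 * sdr_lane_gbps);   /* 10 */
    printf("4X DDR (node to node):       %.0f Gbps\n", 4 * ddr_lane_gbps);   /* 20 */
    printf("12X DDR (switch to switch):  %.0f Gbps\n", 12 * ddr_lane_gbps);  /* 60 */

    /* These are signaling rates; with 8b/10b encoding roughly 80% of the
     * bits carry user data (a 10Gbps link moves about 8Gbps of data). */
    return 0;
}
```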

Like Linux and high-performance file systems, InfiniBand (IB) gets the most attention (and certainly has the highest visibility) in high-performance computing (HPC) centers. Most vendors involved with the technology, however, won't see commercial success until it makes the leap into commercial computing.

In the commercial market, the competition is much stiffer, and pure performance as a purchasing metric matters less than the relationship between price and performance. The acceptance of IB outside the HPC market is likely to be determined by how its price/performance ratio compares with that of competing technologies, particularly Gigabit Ethernet and Fibre Channel. Anyone doing those comparisons will, of course, have to factor in the costs of purchasing and managing the products that handle internal system communications and networking.

If you are still interested, here is some guidance on where to turn next.

First, who are the players in InfiniBand? Some you certainly know already, but some will likely be new names to you. The following is a mix of vendors, HPC sites and universities that are participating in InfiniBand development:

3 leaf, AMD, Ames Laboratory, Appro, Cisco, Cornell Theory Center, Data Direct, Dell, Emcor, Engenio, HP, IBM, Intel, Isilon, Lawrence Livermore National Laboratory, Linux Networx, Los Alamos National Laboratory, Mellanox, Microsoft, NCS, NetEffect, Network Appliance, Obsidian Research, Oracle, Panta, PathScale, Pittsburgh Supercomputing Center, Rackable Systems, Red Hat, Sandia National Laboratories, Silicon Graphics, SilverStore, Sun, System Fabric Works, Tyan, Veritas, Virtual Iron and Voltaire.

I think the best places to start if you want to look for more information on this topic are the Web sites of the two industry groups that are helping drive InfiniBand visibility and development. Look at the sites run by the InfiniBand Trade Association and by the Open InfiniBand Alliance.