This Week in NW
InfiniBand offers data center boost
One challenge in today's data center is managing multiple interconnect standards. Current data center servers support as many as three different interconnects: an Ethernet or proprietary connection for interprocess communication, Fibre Channel for connection to a storage-area network and Ethernet for connectivity to a LAN/WAN infrastructure.
InfiniBand technology enables the consolidation of these interconnects into a single cluster fabric and lets expensive I/O resources be shared and scaled independently of CPU resources. The result is a data center that is easier to manage and more cost-effective.
Using techniques borrowed from the network and high-end computing worlds, InfiniBand is a channel-based, point-to-point switched technology that promises to alter how servers, storage and networking devices in a data center are interconnected.
The InfiniBand architecture defines 1X, 4X and 12X link speeds, delivering bandwidths of 2.5G, 10G and 30G bit/sec, respectively, with provisions in the specification for even higher data rates.
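Those figures follow directly from the per-lane signaling rate. The short Python sketch below (my own illustration, assuming the spec's 8b/10b line encoding, under which every 10 bits on the wire carry 8 bits of data) derives both the raw and the usable bandwidth for each link width.

    LANE_GBPS = 2.5            # raw signaling rate of one 1X lane
    ENCODING_EFFICIENCY = 0.8  # 8b/10b encoding: 8 data bits per 10 line bits

    for width in (1, 4, 12):
        raw = width * LANE_GBPS
        usable = raw * ENCODING_EFFICIENCY
        print(f"{width}X link: {raw}G bit/sec raw, {usable}G bit/sec usable")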
The performance advantages of these high wire speeds are amplified further by next-generation Remote Direct Memory Access (RDMA) protocols, which let data be transferred directly to its intended location, avoiding the multiple buffer copies that are common with today's protocols and interconnect schemes.
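The copy-avoidance argument can be shown with a deliberately simplified Python sketch. Real RDMA is performed by the channel adapter hardware against pre-registered memory; the buffer names and sizes below are invented purely to show where the extra copy disappears.

    payload = bytes(range(256)) * 16      # 4K bytes of application data
    app_buffer = bytearray(len(payload))  # receiver's application buffer

    def conventional_receive(data, dst):
        # Conventional path: data is staged through an intermediate
        # (kernel/socket) buffer before reaching the application.
        kernel_buffer = bytearray(data)   # copy 1: wire to kernel buffer
        dst[:] = kernel_buffer            # copy 2: kernel buffer to app
        return 2

    def rdma_style_receive(data, dst):
        # RDMA-style path: the adapter places data directly into the
        # pre-registered application buffer; no intermediate copy.
        dst[:] = data                     # single, direct placement
        return 1

    print("conventional copies:", conventional_receive(payload, app_buffer))
    print("rdma-style copies:  ", rdma_style_receive(payload, app_buffer))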
InfiniBand can be physically implemented across a printed-circuit-board backplane for in-chassis applications, across copper cables for distances up to 56 feet or across fiber-optic cables for distances up to 990 feet.
InfiniBand repeaters are available to enable even longer-distance applications.

How it works
An InfiniBand implementation consists of three primary hardware component types:
Host channel adapters (HCA) connect servers to the InfiniBand fabric. Initially they will be delivered as add-in cards that plug into PCI-X slots in the servers; eventually, HCA devices will be integrated directly onto server motherboards.
Target channel adapters (TCA) provide connectivity to I/O resources such as Fibre Channel or Gigabit Ethernet targets. These targets can be implemented as stand-alone rack-mountable devices or as part of an InfiniBand I/O chassis that can contain multiple InfiniBand TCAs.
All the HCAs and TCAs are connected through InfiniBand switches that create the InfiniBand fabric.
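As a rough mental model, the three component types and their cabling can be sketched in a few lines of Python; the class and device names here are invented for illustration.

    class Node:
        def __init__(self, name):
            self.name = name
            self.links = []              # nodes this one is cabled to

        def connect(self, other):
            self.links.append(other)
            other.links.append(self)

    class HCA(Node): pass                # host channel adapter, in a server
    class TCA(Node): pass                # target channel adapter, fronts I/O
    class Switch(Node): pass             # forwards traffic between adapters

    # Wire two servers and two I/O targets through one switch.
    switch = Switch("switch-1")
    for node in (HCA("server-1"), HCA("server-2"),
                 TCA("fc-gateway"), TCA("gige-gateway")):
        node.connect(switch)

    print([node.name for node in switch.links])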
One key attribute of the InfiniBand architecture is built-in support for reliability, availability and serviceability features. When redundant links and switches are implemented in a configuration, InfiniBand management software can dynamically identify failing nodes or links and quickly reroute traffic, significantly increasing overall system reliability and availability.
InfiniBand also supports hot-swapping of nodes, letting managers dynamically add or remove a server, switch or I/O node without disrupting service.
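A toy subnet-manager sketch in Python shows both behaviors: rerouting around a failed link over a redundant path, and hot-adding a node. The topology and names are invented for illustration.

    from collections import deque

    links = {
        ("server-1", "switch-a"), ("server-1", "switch-b"),  # redundant uplinks
        ("server-2", "switch-a"), ("server-2", "switch-b"),
        ("switch-a", "io-chassis"), ("switch-b", "io-chassis"),
    }

    def route(src, dst, links):
        # Breadth-first search for a path over the current link set.
        graph = {}
        for a, b in links:
            graph.setdefault(a, set()).add(b)
            graph.setdefault(b, set()).add(a)
        queue, seen = deque([[src]]), {src}
        while queue:
            path = queue.popleft()
            if path[-1] == dst:
                return path
            for nxt in graph.get(path[-1], set()) - seen:
                seen.add(nxt)
                queue.append(path + [nxt])
        return None

    print(route("server-1", "io-chassis", links))  # normal path
    links.discard(("switch-a", "io-chassis"))      # a link fails ...
    print(route("server-1", "io-chassis", links))  # ... traffic reroutes
    links.add(("server-3", "switch-b"))            # hot-add a new server
    print(route("server-3", "io-chassis", links))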
The key to InfiniBand will be the software being developed to support the technology. In addition to the management software, a set of next-generation I/O protocols is being deployed for InfiniBand.
These include the Sockets Direct Protocol (SDP), which will be used for interprocess communication and clustering applications; the SCSI RDMA Protocol (SRP), for communication with Fibre Channel devices and SANs; and the IP over InfiniBand (IPoIB) and Remote Network Driver Interface Specification (RNDIS) protocols, for delivering Ethernet and IP traffic over InfiniBand.
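The appeal of the Sockets Direct Protocol is that applications keep the standard sockets API while the transport underneath changes. The Python sketch below illustrates that idea; the AF_INET_SDP value of 27 matches early Linux SDP stacks and should be treated as an assumption about one implementation, with the code falling back to ordinary TCP where SDP is absent.

    import socket

    AF_INET_SDP = 27   # address family used by early Linux SDP stacks (assumed)

    def connect(host, port):
        try:
            # Same sockets API, but the stream rides InfiniBand via SDP.
            s = socket.socket(AF_INET_SDP, socket.SOCK_STREAM)
        except OSError:
            # No SDP support on this host: fall back to plain TCP.
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.connect((host, port))
        return s       # application code is identical from here on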
Led by steering committee members Compaq, Dell, IBM, Intel, Hewlett-Packard, Microsoft and Sun, the InfiniBand Trade Association (www.Infinibandta.org) published the first InfiniBand architecture specification in October 2000. Since then, a number of vendors have been hard at work delivering silicon, software and systems that implement the new technology.
With early products available now from multiple vendors, and product tests and trials under way at several large IT shops across the country, momentum for this next-generation I/O technology is building.
Bixler is product manager at Banderacom. He can be reached at email@example.com.