InfiniBand: Back to the beginning

Opinion
Feb 14, 2006
3 mins
Servers

* A little history of InfiniBand

Some of my readers may remember efforts during the 1990s by Compaq, HP and IBM to deliver a high-speed serial connection technology called Future I/O. Some may also recall a competing technology – Next Generation I/O (NGIO) – from a group consisting of Intel, Microsoft, and Sun. Eventually the two camps merged their efforts to work on what both saw as the next generation of technology for connecting servers and storage.

Originally, that combined effort was called System I/O, but that name didn’t last long. We now know it as InfiniBand.

This week we will take a high-level overview of InfiniBand. For those of you who want to delve deeper into this topic, I’ll make sure to include some useful pointers and a list of leading players in this week’s second newsletter.

InfiniBand aims to be many things to many users, but it is probably best summed up as a high-performance, highly scalable, low-latency and highly reliable network for the computer center. Originally intended to be a replacement for Gigabit Ethernet and Fibre Channel, it is in a sense a SAN, but it is also something more: it is a SAN where the “S” refers to “storage” and to “system,” and where the “N” may well be a network for all sorts of communications. That is, in addition to linking storage to CPUs, it also links CPUs to one another.

With InfiniBand, storage, servers and networking all connect via a single switch-based fabric and may be managed centrally. Because everything converges in this environment, InfiniBand offers the possibility of managing processing, communications, and storage as a single linked entity. A logical result of this is likely to be improved QoS and, one would hope, a simplified management environment.

One result we have already seen is that InfiniBand currently finds plenty of use as the interconnect technology for clusters and grids. Right now, for example, high-performance computing (HPC) sites are using it to link the processing power of interconnected CPUs in order to scale performance. At the same time, InfiniBand can ramp up I/O when needed to support increased CPU performance; on the other hand, it can also effect economies by sharing the I/O resources of one cluster member with the other cluster nodes.
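To give a flavor of what the fabric looks like from a cluster node, here is a minimal sketch in C against the open source verbs API (libibverbs). It is an illustration, not anyone’s production code: it simply enumerates the InfiniBand host channel adapters the node can see and reports whether each one’s first port is up, the kind of sanity check an administrator might run before handing the fabric over to an MPI library or a storage driver.

/* hca_list.c: a minimal sketch (not production code) that uses the
 * libibverbs API to enumerate the InfiniBand host channel adapters
 * (HCAs) visible on a node and report the state of port 1 on each.
 * Build with: gcc hca_list.c -libverbs
 */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    int i;
    struct ibv_device **devices = ibv_get_device_list(&num_devices);

    if (devices == NULL || num_devices == 0) {
        fprintf(stderr, "no InfiniBand devices found\n");
        return 1;
    }

    for (i = 0; i < num_devices; i++) {
        struct ibv_context *ctx = ibv_open_device(devices[i]);
        struct ibv_port_attr port;

        if (ctx == NULL)
            continue;

        /* Port 1 is assumed here; a multi-port HCA would be queried in a loop. */
        if (ibv_query_port(ctx, 1, &port) == 0) {
            /* The LID is the address the subnet manager assigned to this port. */
            printf("%s: port 1 %s, LID %u\n",
                   ibv_get_device_name(devices[i]),
                   port.state == IBV_PORT_ACTIVE ? "active" : "not active",
                   (unsigned) port.lid);
        }

        ibv_close_device(ctx);
    }

    ibv_free_device_list(devices);
    return 0;
}

Higher-level software, from MPI implementations to storage drivers, sits on top of this same verbs interface, which is part of what makes the single-fabric story plausible.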

InfiniBand is not limited to HPC sites, however, as John from Arizona pointed out last week in an e-mail. It also has a growing footprint at commercial sites, where its ability to centralize the management of virtualized storage, servers and networks may make the fundamental tasks involved in both build-out and just-in-time service delivery dramatically simpler.

More next time.