IDG News Service - A few years back, picking the protocol to link your computers together into a network was a no-brainer. The servers in a mid-sized data center were wired together using Ethernet. And if you wanted to connect many nodes into a single high-performance computing (HPC) system, you went with InfiniBand.
These days, the choice is blurrier. The two protocols are encroaching on each other's turf, engaging in showdowns for the honor of networking the largest data centers. The latest incarnations of Ethernet are perfectly capable of supporting large HPC systems, while InfiniBand is increasingly being used in performance-sensitive enterprise data centers.
One rumble to watch is the Top500, the twice-annual ranking of the world's fastest supercomputers. In the latest edition, released in November, InfiniBand served as the primary interconnect for 226 of the 500 systems, while Gigabit Ethernet was used on 188.
A grounding in performance stats always helps in enjoying an epic battle. Today, for network aggregation points, there is 100 Gigabit Ethernet, in which each port can transfer data at 100Gbps. Less expensive 1, 10 and 40 Gigabit Ethernet cards are also available for servers and switches. Answering our insatiable need for ever more bandwidth, the Ethernet Alliance has begun work on 400 Gigabit Ethernet.
The current version of InfiniBand, FDR (Fourteen Data Rate), offers 56Gbps across a four-lane link (14Gbps per lane, hence the FDR name). The next generation, EDR (Enhanced Data Rate), arriving next year, will offer 100Gbps.
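The lane arithmetic behind those headline numbers is simple enough to sketch. The figures below use the article's round marketing numbers rather than exact signaling rates (which include encoding overhead), and the 25Gbps EDR lane rate is an assumption inferred from the stated 100Gbps four-lane total:

```python
# Back-of-the-envelope math for multi-lane link bandwidth, using the
# article's round numbers (real signaling rates include encoding overhead).

def aggregate_gbps(lanes, gbps_per_lane):
    """Nominal aggregate bandwidth of a multi-lane link, in Gbps."""
    return lanes * gbps_per_lane

# InfiniBand FDR: four lanes at a nominal 14Gbps each
fdr = aggregate_gbps(4, 14)
print(fdr)  # 56

# InfiniBand EDR: assumed four lanes at a nominal 25Gbps each
edr = aggregate_gbps(4, 25)
print(edr)  # 100
```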
But the numbers tell only part of the story. InfiniBand offers advantages such as a flatter topology, less intrusion on the server processor and lower latency. And Ethernet offers near ubiquity across the market for networking gear.
The power of Ethernet is that it is everywhere, from laptops to the largest data center switches, says Ethernet Alliance Chairman John D'Ambrosia. "There are a multitude of [companies] providing Ethernet solutions. You have a common IP that goes across multiple applications," he says.
Such ubiquity ensures interoperability as well as the lowest costs possible from a large pack of fiercely competing vendors. "You buy something, plug it in and, guess what? It just works. People expect that with Ethernet," D'Ambrosia says. "You can start putting things together. You can mix and match. You get competition and cost-reductions."
InfiniBand was introduced in 2000 as a way to tie memory and processors of multiple servers together so tightly that communications among them would be as if they were on the same printed circuit board. To do this, InfiniBand is architecturally sacrilegious, combining the bottom four layers of the OSI (Open Systems Interconnection) networking stack -- the physical, data link, network and transport layers -- into a single architecture.
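As a purely conceptual illustration of that layer-merging claim (not any real API), the four OSI layers the article says InfiniBand subsumes can be picked out of the standard seven-layer list:

```python
# Conceptual sketch only: the seven OSI layers, bottom to top.
OSI_LAYERS = [
    "physical",      # layer 1
    "data link",     # layer 2
    "network",       # layer 3
    "transport",     # layer 4
    "session",       # layer 5
    "presentation",  # layer 6
    "application",   # layer 7
]

# Per the article, InfiniBand defines layers 1-4 as a single architecture
# rather than composing separate protocols at each layer.
infiniband_native = OSI_LAYERS[:4]
print(infiniband_native)  # ['physical', 'data link', 'network', 'transport']
```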