What's the biggest, fastest LAN switch?


This isn't a trick question, but one with a lot of tricky answers depending on how you define "big" and "fast."

Ethernet switch vendors such as 3Com, Force10, Cisco, Extreme, Foundry and HP ProCurve constantly tussle with claims of the highest performance, density and latency. But keep in mind that what's available right now from such vendors is three-year-old technology, on average. Meanwhile, a host of hungry start-ups such as Raptor Networks and Woven Systems have a new take on how to build the "biggest" Ethernet switch. Their approach diverges from single big-iron chassis, and more resembles clustered supercomputing, or InfiniBand networking topologies.


How fast Ethernet can go is bounded by the current 802.3ae standard - 10Gbps - so no single port is supposed to be speedier than that. Other ways to measure switch heftiness are the bandwidth of the switch fabric and the density of ports the chassis or box supports. Then there's the performance of the ports themselves. Latency - how long a switch holds onto a packet - is a factor in switch performance, as is jitter, the variation in that latency from packet to packet.
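
To make those last two metrics concrete, here is a minimal sketch, with made-up per-packet figures, of how latency and jitter might be summarized from a set of test measurements. Production test gear follows standardized methodologies; jitter is expressed here simply as the spread of the latency samples.

```python
# Minimal sketch: summarizing switch latency samples and jitter.
# The sample values below are hypothetical, purely for illustration.
from statistics import mean, pstdev

latency_us = [3.1, 2.9, 3.3, 3.0, 3.2]  # per-packet latencies, microseconds

avg_latency = mean(latency_us)
jitter = pstdev(latency_us)  # one simple way to express latency variation

print(f"average latency: {avg_latency:.2f} us, jitter: {jitter:.2f} us")
```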

"Everybody does line rate on a per-port basis," says David Newman, president of Network Test and a member of Network World's Test Alliance. "The question then becomes how many ports do you do line rate on before you start dropping packets?"

In terms of published specifications, among the biggest of the core enterprise switches are Force10's E1200, Foundry's RX series, Cisco's Catalyst 6500 and Extreme's BlackDiamond. Comparing published specs, Foundry's RX-16 has the highest capacity: it can run 64 10G Ethernet ports at full speed, and up to 192 10G ports per chassis in an oversubscribed configuration (where the combined bandwidth of all ports exceeds the switch's capacity). Force10's E1200 TeraScale switch can run 56 10G ports, or up to 224 10G ports when oversubscribed. Extreme's BlackDiamond 10808 chassis can support 48 non-blocking 10G ports. Cisco's Catalyst 6513 can handle 32 10G Ethernet connections, all running at full duplex.
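
The oversubscription arithmetic is straightforward. The sketch below restates the Foundry RX-16 numbers cited above; the line-rate figure is implied by the 64 non-blocking ports rather than taken from a data sheet.

```python
# Rough sketch of the oversubscription arithmetic described above.
# Port counts come from the article; the fabric line-rate figure is
# inferred from the non-blocking port count, not from a vendor spec.
nonblocking_ports = 64        # Foundry RX-16, per the article
max_ports = 192               # same chassis, oversubscribed configuration
port_speed_gbps = 10

offered_load = max_ports * port_speed_gbps               # 1,920 Gbps of attached ports
fabric_line_rate = nonblocking_ports * port_speed_gbps   # 640 Gbps switched at full speed

print(f"oversubscription ratio: {offered_load / fabric_line_rate:.0f}:1")  # 3:1
```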

Some say that how vendors carve up per-slot and overall system bandwidth matters less than how a switch handles variables such as jitter and packet loss when it is running full blast. "What I think is a more useful metric than throughput is latency," Newman says. "There, I'd say clearly Cisco is the best."

Newman says he has clocked a Cisco Catalyst 4948 at around 3 microseconds at 10G rates, "which is the lowest I've measured," he adds. "Force10 was in the low double digits [in microseconds of delay]. They used to be hundreds of times higher, which would mean thousands of packets outstanding. But they've fixed it some over time."
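
Newman's "thousands of packets outstanding" remark follows from simple arithmetic: at 10Gbps a minimum-size Ethernet frame occupies about 67 nanoseconds on the wire, so a few hundred microseconds of delay translates into thousands of frames in flight. The sketch below assumes a 300-microsecond latency purely for illustration.

```python
# Back-of-the-envelope check on "thousands of packets outstanding".
# A minimum-size frame is 64 bytes plus 8 bytes of preamble and a
# 12-byte interframe gap: 84 bytes (672 bits) on the wire.
line_rate_bps = 10e9
bytes_on_wire = 64 + 8 + 12
frame_time_s = bytes_on_wire * 8 / line_rate_bps   # ~67.2 nanoseconds

assumed_latency_s = 300e-6   # assumed figure, "hundreds of times" higher than a few microseconds
packets_in_flight = assumed_latency_s / frame_time_s

print(f"~{packets_in_flight:.0f} minimum-size packets outstanding")  # ~4464
```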

With Force10's newest switch offering - the S-series, which Newman says he has not yet tested - the company claims latency in the 200-to-300-nanosecond range, roughly an order of magnitude lower than 3 microseconds. (The claim is based on a test of the product conducted by network testing firm The Tolly Group and sponsored by Force10.)

Lawrence Berkeley National Laboratory, a U.S. Department of Energy research lab in Berkeley, Calif., uses both Force10 and Cisco switches in its data center and LAN core. Putting a finger on which of the two products is "fastest" or "best-performing" is difficult, says Mike Bennett, LBNL senior network engineer for the LBLnet Services Group, since the switches are used in different applications.

Matters of scale

Maximum nonblocking 10G Ethernet ports supported per chassis by vendor:
Foundry BigIron RX-16: 64
Force10 E1200: 56
Extreme BlackDiamond 10K: 48
Cisco Catalyst 6500: 32

SOURCE: VENDORS' PRODUCT DATA SHEETS

"[I've] tested a 6500 with 2 ports of 10G and the E1200 with 2 ports of 10G - and neither of them are over-subscribed, and neither drops packets," says Bennett. "So it's not that one's faster than the other. It's just that they both work as advertised."

When large numbers of 10G ports are needed, Bennett uses the Force10 E1200 for its higher 10G port density. "Typically when you buy Cisco, you buy the kitchen sink when it comes to features," says Bennett. "Force10 is different in that they don't have everything in a particular version of an operating system." That's preferable for applications at the lab that require a high-density, non-blocking switch that simply moves packets fast. "We try to go with the simplest, fastest solution in order to minimize the number of variables to keep operating costs low."

Across town at Lawrence Livermore National Laboratory, also a DOE lab, the network team is looking into next-generation switch architectures that will move the Ethernet network to a level of bandwidth and latency found in storage-area networks.

"We're looking at the next-generation machines and everything is going to go up by a factor of 10," says Dave Wiltzius, network division leader at the lab.

"Everything will be 10G. So we're looking for a switch, or switch fabric that can give us on the order of 2,000 10G ports… We're basically interested in building a federated switch environment using fat tree topologies and things like that."

This "fat tree" topology Wiltzius envisions involves a meshed non-blocking switching architecture modeled somewhat after the traditional public telephone network, where switches are simple devices with a few connectivity ports interconnected via multiple paths. What's more they effectively utilize the bandwidth.

Ethernet doesn't yet do this - or at least doesn't do it very well.

One technique Wiltzius already uses to approximate this fat-tree effect is port aggregation, or Layer 2 "hashing," in which multiple Gigabit or 10G Ethernet links are bonded into a larger virtual pipe. Tying switches together, or servers to switches, with hashed Ethernet pipes gives a larger virtual throughput, but the bonding is limited to eight ports (up to 80Gbps with eight hashed 10G links). The method uses a hashing algorithm that spreads packets across the bonded connections more or less at random. "With hashing, you get an uneven distribution, because of the random nature of the algorithm, which doesn't necessarily offer the best utilization of the bandwidth," he says.
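
A rough sketch of how that hashing behaves: each flow is mapped to one member link by hashing header fields, and with a small or unlucky set of flows the links load unevenly. The CRC32-over-MAC-addresses hash below is a simplification; real switches use vendor-specific field combinations.

```python
# Minimal sketch of Layer 2 link-aggregation hashing: each flow is hashed
# to one member link, so some links end up busier than others. Hashing
# (src MAC, dst MAC) is a simplified stand-in for real vendor hash inputs.
from collections import Counter
from zlib import crc32

NUM_LINKS = 8  # e.g., eight bonded 10G links, ~80Gbps aggregate

def pick_link(src_mac: str, dst_mac: str) -> int:
    return crc32(f"{src_mac}{dst_mac}".encode()) % NUM_LINKS

# 50 hypothetical flows from different hosts toward one destination
flows = [(f"00:00:00:00:00:{i:02x}", "00:00:00:00:ff:01") for i in range(50)]
load = Counter(pick_link(src, dst) for src, dst in flows)

print(dict(sorted(load.items())))  # uneven per-link counts illustrate the utilization problem
```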

Some start-ups looking to address the limitations of Ethernet switching include Arastra, a closely held company based in Palo Alto, and Santa Clara-based Woven Systems, which is in semi-stealth mode developing an Ethernet-based mesh network product.

"What we're trying to do is deliver the best features of Fibre Channel/InfiniBand on a 10G Ethernet fabric," says Harry Quackenboss, president and CEO of Woven.

The approach Woven is taking is similar to the trend of grid or distributed, clustered computing, where large, symmetric multi-processor (SMP) servers are being replaced by single- or dual-processor nodes coupled together over a network.

"The same thing is going to happen to LAN switching in the data center, with respect to scale out," Quackenboss says. "The big [LAN] switches are expensive, and the biggest non-blocking switch you can buy for data center applications is a 64-port Foundry system."

Woven is working on Layer 2 Gigabit and 10G Ethernet data center switches that use special algorithms to let the boxes emulate InfiniBand or Fibre Channel networks in some respects. Multiple paths can be established among switches in the fabric, allowing bandwidth to be allocated more dynamically over those paths, because redundant links are not shut down as they are in spanning tree-based Ethernet, Quackenboss says.

"If you want to build out a network of more than two switches, you can use link aggregation or trunking to bond groups of Ethernet segments," Quackenboss says. "But if you want to put three or more switches in a network, one switch becomes the bottleneck." Layer 3 switching, and protocols such as OSPF and ECMP, can be used to create multi-path networks, but these methods add cost. Layer 3 switch ports cost, on average, five times as much as Layer 2 ports, according to IDC.

Plugging servers into multiple ports on different devices in a fabric of switches would also make server reconfiguring easier, he says.

"Data center managers would like to be able to dynamically reconfigure applications and servers without physically recalling them," Quackenboss says. Leveling connectivity in the data center to multi-path, Layer 2 Ethernet would help achieve this. The idea is somewhat analogous to the Layer 2 Metro Ethernet technologies being developed by carrier gear makers.

"In a sense, the modern data centers have enough servers in them that they resemble a collapsed a metro-area network into one room," he says.

Another company, one that already has products on the market, is Raptor Networks, which makes fixed-configuration Gigabit and 10G Ethernet switches that connect to each other to form a meshed fabric. Rather than focusing on high-density data centers, Raptor aims its gear at the LAN backbone and at aggregating wiring-closet traffic. "We've created the ability to do at L2 what everyone else must go to Layer 3 to do," says CEO Tom Wittenschlager.

The three-year-old company makes low-cost, fixed-configuration switches with 24 Gigabit Ethernet ports, six 10G Ethernet ports and 160Gbps of total switching bandwidth. The single-rack-unit boxes support Layer 2-4 switching and run a proprietary modification of 10G Ethernet that allows the devices to be hooked together in a multi-path mesh at Layer 2 without using the spanning tree protocol. Instead, the switches connect with Raptor Adaptive Switch Technology (RAST), a protocol that binds the switches together much as modules in a chassis switch are tied to the backplane or switch fabric. Citing internal company test data, Wittenschlager says the technology can move packets through a mesh of four Raptor switches - passing a packet in and out of a 10G Ethernet port eight times among the four boxes - with 6.48 microseconds of latency.
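
Taken at face value, Raptor's figure works out to well under a microsecond per port traversal. The sketch below simply restates the company's own numbers; it is not an independent measurement.

```python
# Per-hop arithmetic behind Raptor's quoted figure: 6.48 microseconds for
# a packet that enters and exits a 10G port eight times across four boxes.
total_latency_us = 6.48   # company-cited end-to-end latency
port_traversals = 8       # in and out of a 10G port eight times

print(f"~{total_latency_us / port_traversals:.2f} us per 10G port traversal")  # ~0.81
```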

"This creates the effect of each Raptor switch acting like a blade in a module, which allows traffic to travel among the switches very fast with low latency," he says. "To achieve this, routing information is inserted into unused header space in a standard Ethernet frame, which gives delivers switch heartbeat and route path data among Raptor switches in a cluster, according to Wittenschlager.

"We've created the ability for physically separate blades [the raptor switches] to communicate on a common back plane, as if they were all inside one chassis. It's really one virtual switch, with blades that can be sitting up to 80 kilometers apart" when connected via 10G Ethernet over single-mode long-haul fiber.

Non-Raptor switches connected to the mesh's 10G or Gigabit ports see it as a single large LAN switch and can connect as plain Ethernet without added configuration, he says.

A mesh of four Raptor boxes recently replaced a core of two Catalyst 6509 switches in the network of L.A. Care, the healthcare management firm for Los Angeles County employees.

The Raptor boxes were deployed to segment the company's flat Layer 2 LAN into VLAN subnets, keeping it at Layer 2, with 10G Ethernet in the core. Three 10G Ethernet pipes connect each box in the mesh; the Catalyst switches have been moved to the LAN edge for connecting the organization's 350 end users, and other devices. Servers are plugged into the Raptor core on non-RAST Gigabit Ethernet ports.

After solving some initial spanning tree loop issues between the IOS-based Cisco routers and the RAST-based Raptor switches, the network is running "smooth and very fast," says Rayne Johnson, director of IT and security at L.A. Care. The Raptor product cost around $180,000 to install, while Cisco quoted Johnson around $500,000 to upgrade the core with 10G Ethernet and VLAN capabilities. "I usually don't get myself involved with a product" in its initial development, Johnson says. "But it was worth it. In the end, you can't beat the price."
