
Testing 10Gig Ethernet switches

Feb 03, 2003 | 15 mins
Network Switches | Networking | Routers

Mixed results: Only Force10 delivers 10G bit/sec throughput, but all switches boast impressive features.

Lab tests prove that most first-generation 10G Ethernet switches don’t deliver anywhere close to 10 gigabits of throughput. But the latest backbone switches  do deliver more bandwidth than earlier gear that used link aggregation, and they do a better job of quality-of-service enforcement.

In Network World’s first hands-on assessment of the new 10G Ethernet switches, we put boxes from five major vendors through a comprehensive set of performance tests – both 1 and 10 Gigabit flavors of Ethernet. Avaya, Force10 Networks, Foundry Networks, HP and Nortel accepted our challenge. Other major players went missing, citing various reasons (see “No shows”).

Related links:

•  QoS sanity check

•  No shows

•  10 Gigabit switches feature list (Excel)

•  How we did it

Hardware gremlins plagued Nortel’s devices, and we couldn’t obtain valid results. For the remaining players, the results offer limited encouragement:

•  Force10’s E1200 delivers true 10G bit/sec throughput with any frame size, a performance that earned it the Network World Blue Ribbon award.

•  Foundry’s FastIron 400 and HP’s ProCurve Routing Switch 9300m series (which HP buys from Foundry) achieved fast failover times.

•  Avaya’s Cajun P882 MultiService Switch kept jitter to a minimum and dropped no high-priority packets in our QoS tests.

But when all is said and done, none of these first-generation devices represents the perfect switch. Force10’s E1200 aced the throughput tests, but its delay and jitter numbers are far higher than they should be. As for the others, they won’t really be true 10 Gigabit devices until they get capacity upgrades.

While the 10 Gigabit performance results are disappointing, it’s important to put those numbers in context. Few, if any, users are planning pure 10G Ethernet networks, so these devices support a variety of interfaces and other features useful for enterprise core networking, such as highly redundant components and multiple device management methods (see full feature listing – Excel file). It’s also worth noting that these switches did a pretty good job of handling tasks not directly related to 10G Ethernet, such as failover and QoS enforcement.

To the tests

We evaluated switch performance with four sets of tests: 10 Gigabit alone; Gigabit Ethernet across a 10G Ethernet backbone; failover times; and QoS enforcement.

The main goal of our pure 10G Ethernet tests was to describe the basic forwarding and delay characteristics of the new technology. Would it really go at 10 gigabits? And how much delay and jitter would the new interfaces incur at that speed?

To answer these questions, we set up a test bed comprising a single switch equipped with 10G Ethernet interfaces and SmartBits traffic generator/analyzers from Spirent Communications (see How we did it). All vendors supplied four 10G Ethernet interfaces for this event except Avaya, which supplied two.

We configured the SmartBits to offer traffic from more than 2,000 virtual hosts (more than 1,000 hosts in Avaya’s case), representing the large number of devices attached to a typical 10 Gigabit switch.

We used three frame sizes: 64-byte frames, because they’re the shortest allowed in Ethernet, and as such offer the most stressful test case; 256-byte frames, because they’re close to the median frame length of around 300 bytes as observed on various Internet links; and 1,518 bytes, the maximum allowed in Ethernet and the size used in bulk data transfers.

Only one switch — Force10’s E1200 — actually delivered true line-rate throughput (see Figure 1). Impressively, the E1200 moved traffic at line rate with short, medium and long frames. In all our baseline tests, the E1200 did not drop a single frame.

Avaya, Foundry and HP boxes moved traffic at roughly 80% of line rate. Avaya and Foundry representatives on-site for testing said switch fabrics that topped out at 8G bit/sec limited their devices, and that’s generally consistent with the frame rates these switches achieved.

In the best case, Foundry moved traffic at 86% of line rate when handling 64-byte frames, a result Foundry explained by saying its switch fabric actually has a bit more than 8G bit/sec of capacity.

Maybe so, but in tests with four interfaces Foundry’s throughput with 256- and 1,518-byte frames was only about 5.5G and 5G bit/sec, respectively. Curiously, the HP switch achieved throughput close to 8G bit/sec per interface for all frame lengths, even though Foundry manufactures both vendors’ switches. One possible explanation is that Foundry and HP supplied different software versions for testing. Given HP’s higher throughput (and Foundry’s, when tested with just two interfaces), some performance issue with the software image could explain the difference.

It should be noted that Avaya supplied two 10G Ethernet interfaces for testing, vs. four from other vendors. Single-port-pair configurations are generally less stressful than the four-way full mesh we used to test other switches.

One other note is that there are small differences between theoretical maximum rates and the actual rates of Force10’s E1200. This does not mean the E1200 dropped frames. The IEEE specification for 10G Ethernet lets rates vary by up to 3,000 frames per second because of clock skew; in our tests, the actual amount of slippage was far less.
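That allowance follows from Ethernet’s clock tolerance: the 802.3 standard lets each end’s clock run up to 100 parts per million fast or slow. At the nominal 64-byte line rate of 14,880,952 frames per second, two links at opposite extremes can diverge by roughly 3,000 frames per second, as a quick calculation shows:

```python
# Worst-case frame-rate slack from Ethernet's +/-100 ppm clock
# tolerance: one end fast by 100 ppm, the other slow by 100 ppm.
CLOCK_TOLERANCE_PPM = 100
LINE_RATE_64B_FPS = 14_880_952   # 10GbE line rate, 64-byte frames

slack_fps = LINE_RATE_64B_FPS * 2 * CLOCK_TOLERANCE_PPM / 1_000_000
print(f"worst-case skew: ~{slack_fps:,.0f} frames/sec")  # ~2,976
```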

Delay tactics

For some users, delay and jitter (delay variation) are even more important measures of a switch than its speed, especially when real-time applications are involved. In Gigabit Ethernet switches, delays typically are measured in the tens of microseconds. We expected a tenfold delay reduction with the 10 Gigabit devices, but that’s not what we found.

Delay should be close to nil at 10 Gigabit rates. Consider a hypothetical perfect switch that adds no delay of its own. At 10 Gigabit rates, it would take just 67 nanosec to transmit a 64-byte frame and 1,230 nanosec to transmit a 1,518-byte frame. These numbers are far below the threshold at which the perceived performance of any application would be affected.
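Those figures are pure serialization time: the bits in the frame, plus preamble and interframe gap, divided by the link speed. A quick check of the arithmetic:

```python
# Serialization time for one frame at 10G bit/sec, counting the
# 8-byte preamble and 12-byte interframe gap on the wire.
LINK_BPS = 10_000_000_000

def tx_time_ns(frame_bytes: int) -> float:
    return (frame_bytes + 20) * 8 / LINK_BPS * 1e9

print(f"64-byte frame:    {tx_time_ns(64):.0f} ns")    # ~67 ns
print(f"1,518-byte frame: {tx_time_ns(1518):.0f} ns")  # ~1,230 ns
```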

In the real world, delays are much higher (see Figure 2). With an offered load of 10%, where delay is the result of simple forwarding and no other effect such as queue buildup, we recorded average delays ranging from 4.3 microsec (Foundry’s FastIron 400 with 64-byte frames) to 46 microsec (Avaya’s Cajun P882 with 1,518-byte frames). For the time-curious: 1 millisec is one-thousandth of a second; 1 microsec is one-thousandth of a millisec; and 1 nanosec is one-thousandth of a microsec.

While none of the delays are anywhere close to the point at which a single switch would affect application performance, there are two caveats to bear in mind. First, while it’s true that the point at which applications suffer is in the milliseconds, it’s also true that delay is cumulative. Thus, a network built with many switches could suffer from more delay overall.

Second, there’s no good reason why a 10 Gigabit device should hang on to a frame for 30 to 50 microsec. For example, for Force10’s E1200 to add 31.9 microsec when handling 64-byte frames, it had to buffer 46 frames at a time.

Force10 says the software it supplied for testing was optimized to produce the lowest delays under heavy loads. The vendor says its shipping software, Version 4.1.1, and a configuration change will reduce delay by up to 50%. We did not verify this claim.

The Foundry and HP boxes did the best job of keeping delay to a minimum. Even in the worst case — HP with 1,518-byte frames — average delay was only 7.6 microsec. That’s not just a big improvement over the delay that Gigabit Ethernet boxes add; it’s significantly lower than some other vendors’ best delay numbers with 10G Ethernet interfaces at any frame length.

For voice-over-IP or video applications, jitter is an even more critical metric than delay. Our jitter measurements showed that the switches with the least delay — from Foundry and HP — also recorded negligible amounts of jitter (see Figure 2). For both vendors, jitter was as low as 100 nanosec, the minimum our test instruments could record.

To its credit, Avaya’s Cajun P882 also kept jitter down in the hundreds of nanoseconds, at least four orders of magnitude below the point at which application performance would suffer.

Force10’s jitter numbers were higher than the others and generally represented about 25% of the average delay. This means switch delay could swing up or down by 25% over time, and that’s a relatively big variation. While the amounts involved aren’t enough to degrade application performance by themselves, the earlier caveat about delay being cumulative holds: A network built with many Force10 switches could add significant jitter.

Backbone builders

While 10 Gigabit baseline tests give us a good idea of how the technology stacks up inside these switches, few if any network designers envision pure 10G Ethernet networks anytime soon. We also tested 10G Ethernet the way it’s more likely to be used: as an aggregation technology for multiple Gigabit Ethernet connections.

For the bandwidth-aggregation tests, we constructed a test bed comprising two chassis connected with a 10G Ethernet link. We also equipped each chassis with 10 Gigabit Ethernet interfaces (10 ports at 1G bit/sec each), and offered traffic across the 10 Gigabit backbone. With 510 virtual hosts offering traffic to each of the Gigabit Ethernet interfaces, there were 10,200 hosts exchanging traffic — just the sort of thing one might find at the core of many large corporate networks.

It’s no coincidence this test bed is similar in design to the one we used in a previous evaluation of link aggregation (see “The trouble with trunking”). A primary goal of this test was to determine whether a 10G Ethernet backbone improves on the results from those link-aggregation tests, in which high frame loss and latency ruled.

We again used the Spirent SmartBits to offer 64-, 256- and 1,518-byte frames to determine throughput, delay and jitter. In this case, we used a partial-mesh traffic pattern, meaning 10 interfaces on one chassis exchanged traffic with the 10 other interfaces across the 10 Gigabit backbone, and vice versa.

Force10’s E1200 switch again led the pack, delivering line-rate throughput at all three frame lengths (see Figure 3). The vendor’s aggregate throughput approached 30 million frames per second across two chassis with zero frame loss.

Foundry’s and HP’s results came in right up against the 8G bit/sec limit of their switch fabrics. Foundry’s results with 256- and 1,518-byte frames were significantly better than in the four-port 10G Ethernet baseline tests.

Avaya’s Cajun trailed the pack, with throughput of less than 5G bit/sec in every test. Avaya attributes this to the Cajun’s crossbar design, which becomes congested when utilization exceeds about 60% of its capacity. In this case, 60% of an 8G bit/sec switch fabric represents just about the levels we saw.

The good news for all vendors is that throughput over a 10 Gigabit backbone is significantly higher than the numbers we obtained in a previous test using link aggregation. In the worst case, we saw throughput tumble to just 10% of line rate with link aggregation; here, even the worst-case number was nearly five times higher. Clearly, it’s better to use a single physical pipe than a virtual one.

Less waiting

Going with a 10G Ethernet backbone instead of link aggregation also offers benefits when it comes to delay and jitter. In previous tests, we saw delay jump by as much as 1,200% when we used link aggregation. In this year’s test, we saw only modest increases in delay and jitter compared with the pure 10 Gigabit numbers.

Switches from Foundry and HP did the best job of keeping average delay and jitter to low levels across all frame lengths (see Figure 4). At worst, Foundry’s FastIron added average delay of 32.3 microsec with 1,518-byte frames, far below the point at which applications would suffer. And while the FastIron’s delay is higher than the 7.6 microsec we recorded in the pure 10 Gigabit tests, remember that frames had to cross two chassis and two pairs of interfaces in this configuration, vs. just one chassis and pair of interfaces.

Delay and jitter were higher in this test with the Avaya and Force10 switches — much higher in Force10’s case. In the worst cases, Force10’s E1200 delayed 1,518-byte frames an average of 90.9 microsec, and delay for 1,518-byte frames going through Avaya’s Cajun varied by an average of 16.4 microsec. By themselves, these numbers are no cause for concern; they’re in the same ballpark as some Gigabit Ethernet switches, and Gigabit Ethernet was the gating factor in this configuration. Still, the Foundry and HP results show it is possible to achieve lower delay and jitter.

Fast failover

For many users, resiliency is an even more important consideration than throughput, jitter or delay. We assessed the switches’ ability to recover from a link failure by measuring how long it took to reroute traffic onto a secondary link.

In this test, we connected two switch chassis with two 10G Ethernet links and asked vendors to configure Open Shortest Path First (OSPF) so that one link was designated as primary and the other as secondary. Then we offered traffic to a Gigabit Ethernet interface on one chassis and verified it was carried over the primary link to the other chassis. Once we verified traffic was being forwarded, we physically disconnected the primary link between chassis. This forced the switches to reroute traffic onto the secondary link. Making the change takes some amount of time, and during the cutover some frames inevitably will be dropped. We derived the failover time from the number of frames lost.
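The derivation is simple division: at a constant offered rate, every lost frame accounts for a fixed slice of time. A sketch with hypothetical numbers (the offered rate matches Gigabit Ethernet line rate with 64-byte frames; the loss count is back-calculated for illustration, not a measured result):

```python
# Derive failover time from frame loss: at a constant offered rate,
# (frames lost) / (frames per second) = seconds of outage.
def failover_time_ms(frames_lost: int, offered_fps: float) -> float:
    return frames_lost / offered_fps * 1000

# Hypothetical example: Gigabit Ethernet line rate with 64-byte
# frames is about 1,488,095 frames/sec; losing ~352,700 frames
# would imply an outage of roughly 237 ms.
print(f"{failover_time_ms(352_700, 1_488_095):.0f} ms")
```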

Force10 supplied enough 10G Ethernet interfaces for us to repeat this test with two pairs of backbone links connected using 802.3ad link aggregation. Avaya couldn’t participate in this event because it did not supply the four 10G Ethernet line cards needed for a single-link failover test. We tested the other vendors by failing over a single backbone link.

Force10’s performance in our failover tests marked another big improvement over our previous link-aggregation assessments, and the other vendors posted impressive failover results as well.

In previous tests, failover times increased by a factor of 10 when link aggregation was in use. Not so with Force10’s E1200 (see complete failover results). In this test, cutover time improved when Force10 enabled link aggregation, going from 474 millisec without link aggregation to 384 millisec with it.

Neither Foundry nor HP supplied enough 10G Ethernet interfaces to try link aggregation, but both vendors’ boxes failed over even faster than Force10’s switch — 237 millisec for Foundry and 313 millisec for HP.

QoS enforcement

When it comes to enforcing QoS parameters for different traffic classes at 10 Gigabit rates, no vendor delivered everything we requested. Here again, though, our results were far better than previous tests using link aggregation.

We used the same SmartBits script from the previous link aggregation test. We offered three different traffic classes and expected the switches to do four things.

First, switches should have re-marked traffic using Differentiated Services code points. Re-marking frames is a good security practice; without it, users could mark all their traffic as high priority.

Second, we expected switches to deliver high-priority traffic without loss, even with congestion present.

Third, we asked vendors to configure the switches so that low-priority traffic would never consume more than 2G bit/sec of available bandwidth. This rate-limiting feature is critical for keeping low-priority flows, such as streaming media feeds, in check.

Finally, we expected switches to allocate remaining bandwidth to medium-priority traffic. Given our configuration it was possible to forward all medium-priority traffic without loss, but not all switches actually did so.
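Taken together, the four rules describe a familiar queuing discipline: re-mark on ingress, serve high priority strictly, police low priority to a fixed rate, and hand the remainder to medium priority. The sketch below is not any vendor's implementation; it is a minimal model of the policy we asked for, with illustrative class names and DSCP values:

```python
# Minimal model of the requested QoS policy: re-mark DSCP on
# ingress, never drop high priority, cap low priority at
# 2G bit/sec, and give medium priority the leftover bandwidth.
# Class names and rates are illustrative, not any tested
# switch's actual configuration.
LINK_BPS = 10_000_000_000
LOW_CAP_BPS = 2_000_000_000

DSCP = {"high": 46, "medium": 26, "low": 0}  # EF, AF31, best effort

def remark(frame: dict) -> dict:
    # Rule 1: overwrite whatever DSCP the sender set.
    frame["dscp"] = DSCP[frame["class"]]
    return frame

def allocate(offered_bps: dict) -> dict:
    # Rules 2-4: strict priority for high, policed low, and medium
    # takes the remainder (the only class that may see loss here).
    high = min(offered_bps["high"], LINK_BPS)
    low = min(offered_bps["low"], LOW_CAP_BPS)
    medium = min(offered_bps["medium"], LINK_BPS - high - low)
    return {"high": high, "medium": medium, "low": low}

# Congested example: 12G bit/sec offered to a 10G bit/sec link.
print(allocate({"high": 3e9, "medium": 5e9, "low": 4e9}))
```

Under this model, congestion falls entirely on the policed low-priority class and, once the cap is hit, on medium priority, which matches the loss pattern we asked the vendors to produce.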

Deciding which switch did the best job depends on which of these four rules is most important (see graphic). If never dropping a high-priority frame is the most important criterion, then Avaya’s Cajun came out on top.

Then again, if coming closest to meeting the rules for all traffic classes matters most, then Force10’s E1200 wins this event. Though it did drop small amounts of high-priority traffic, the E1200 did the best job of meeting the desired rates for all three traffic classes.

Results for Foundry and HP were a bit puzzling. While both vendors’ switches did a reasonable job in handling high- and medium-priority traffic, they were far too severe in rate-controlling low-priority traffic. Engineers from both companies said the switches cannot rate-limit one class while simultaneously enforcing drop preferences for other classes.

The good news for all vendors is that QoS enforcement across a 10 Gigabit backbone generally works better than it does across an aggregated link consisting of multiple Gigabit Ethernet links. Last time, we saw vendors drop significant amounts of high-priority traffic and get the ratios between traffic classes all wrong.

It would be a stretch to say the first generation of 10G Ethernet products turned in excellent results. For most switches, 8G and not 10G bit/sec seems to be the limit. Where line-rate throughput is possible, the cost is relatively high delay and jitter. But for whatever problems we found, the new 10 Gigabit switches offer one very convincing advantage over previous generations: They get beyond the gigabit barrier far better than the alternative, link aggregation.

Figure 1: 10G Ethernet throughput

Figure 2: 10G Ethernet delay and jitter

Figure 3: Gigabit Ethernet throughput over a 10G Ethernet backbone

Figure 4: Gigabit Ethernet average delay and jitter

– See a summary of how we ranked each of the tested devices.