Testing 10Gig Ethernet switches
Mixed results: Only Force10 delivers 10G bit/sec throughput, but all switches boast impressive features.
In this test, we connected two switch chassis with two 10G Ethernet links and asked vendors to configure Open Shortest Path First so that one link was designated as primary and the other as secondary. Then we offered traffic to a Gigabit Ethernet interface on one chassis and verified it was carried over the primary link to the other chassis. Once we verified traffic was being forwarded, we physically disconnected the primary link between chassis. This forced the switches to reroute traffic onto the secondary link. Rerouting takes time, and some frames inevitably are dropped during the cutover; we derived the failover time from the number of frames lost.
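The conversion from frame loss to failover time is simple arithmetic: divide the frames lost during the cutover by the rate at which the test gear was offering frames. Below is a minimal sketch of that calculation; the offered rate and loss count are illustrative assumptions, since the exact stream parameters used in the test aren't stated here.

```python
# Minimal sketch: deriving failover time from frame loss.
# The offered rate and loss count below are hypothetical; they are not
# the actual parameters used in this test.

def failover_time_ms(frames_lost: int, offered_rate_fps: float) -> float:
    """Convert frames dropped during cutover into an outage time in milliseconds."""
    return frames_lost / offered_rate_fps * 1000.0

if __name__ == "__main__":
    # A Gigabit Ethernet stream of 64-byte frames at line rate runs at roughly
    # 1,488,095 frames/sec; losing about 705,000 frames at that rate would
    # indicate an outage of roughly 474 millisec.
    offered_rate = 1_488_095
    frames_lost = 705_357
    print(f"failover time ~ {failover_time_ms(frames_lost, offered_rate):.0f} ms")
```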
Force10 supplied enough 10G Ethernet interfaces for us to repeat this test with two pairs of backbone links connected using 802.3ad link aggregation. Avaya couldn't participate in this event because it did not supply the four 10G Ethernet line cards needed for a single-link failover test. We tested the other vendors by failing over a single backbone link.
Force10's failover performance was another area of big improvement over the link-aggregation results from our previous assessments. The other vendors didn't supply enough 10G Ethernet interfaces to try link aggregation, but their failover results were still impressive.
In previous tests, failover times increased by a factor of 10 when link aggregation was in use. Not so with Force10's E1200 (see complete failover results). In this test, cutover time improved when Force10 enabled link aggregation, going from 474 millisec without link aggregation to 384 millisec with it.
Foundry's and HP's boxes, tested without link aggregation, failed over even faster than Force10's switch -- 237 millisec for Foundry and 313 millisec for HP.
QoS enforcement
When it comes to enforcing QoS parameters for different traffic classes at 10 Gigabit rates, no vendor delivered everything we requested. Here again, though, the results were far better than those from previous tests using link aggregation.
We used the same SmartBits script from the previous link aggregation test. We offered three different traffic classes and expected the switches to do four things.
First, we expected switches to mark traffic using Differentiated Services code points. Re-marking frames is a good security practice; without it, users might mark all their traffic as high priority.
Second, we expected switches to deliver high-priority traffic without loss, even with congestion present.
Third, we asked vendors to configure the switches so that low-priority traffic would never consume more than 2G bit/sec of available bandwidth. This rate-limiting feature is critical for keeping low-priority flows, such as streaming media feeds, in check.
Finally, we expected switches to allocate remaining bandwidth to medium-priority traffic. Given our configuration it was possible to forward all medium-priority traffic without loss, but not all switches actually did so.
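Taken together, the four rules imply a simple bandwidth budget on a congested 10G backbone: high-priority traffic passes untouched, low-priority traffic is clamped at 2G bit/sec, and medium-priority traffic gets whatever is left. The sketch below works through that arithmetic with hypothetical offered loads, since the exact per-class rates we offered aren't listed here.

```python
# Minimal sketch of the bandwidth arithmetic behind the four QoS rules.
# The offered loads are hypothetical; the actual per-class rates used in
# the test are not stated here.

LINK_CAPACITY_GBPS = 10.0
LOW_PRIORITY_CAP_GBPS = 2.0

def expected_forwarding(offered_high: float, offered_medium: float,
                        offered_low: float):
    """Per-class rates (Gbit/sec) a switch should forward if it honors strict
    priority for high, a 2G bit/sec rate limit on low, and gives the
    remaining bandwidth to medium."""
    high = offered_high                                # never dropped
    low = min(offered_low, LOW_PRIORITY_CAP_GBPS)      # rate-limited
    medium = min(offered_medium, LINK_CAPACITY_GBPS - high - low)
    return high, medium, low

# Hypothetical congestion scenario: 12G bit/sec offered to a 10G backbone.
h, m, l = expected_forwarding(offered_high=3.0, offered_medium=5.0, offered_low=4.0)
print(f"high={h}G  medium={m}G  low={l}G")   # high=3.0G  medium=5.0G  low=2.0G
```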
Deciding which switch did the best job depends on which of these four rules is most important (see graphic). If never dropping a high-priority frame is the most important criterion, then Avaya's Cajun came out on top.
Then again, if coming closest to meeting the rules for all traffic classes matters most, then Force10's E1200 wins this event. Though it did drop small amounts of high-priority traffic, the E1200 did the best job of meeting the desired rates for all three traffic classes.
Results for Foundry and HP were a bit puzzling. While both vendors' switches did a reasonable job in handling high- and medium-priority traffic, they were far too severe in rate-controlling low-priority traffic. Engineers from both companies said the switches cannot rate-limit one class while simultaneously enforcing drop preferences for other classes.
The good news for all vendors is that QoS enforcement across a 10 Gigabit backbone generally works better than it does across an aggregated link consisting of multiple Gigabit Ethernet links. Last time, we saw vendors drop significant amounts of high-priority traffic and get the ratios between traffic classes all wrong.
It would be a stretch to say the first generation of 10G Ethernet products turned in excellent results. For most switches, 8G and not 10G bit/sec seems to be the limit. Where line-rate throughput is possible, the cost is relatively high delay and jitter. But for whatever problems we found, the new 10 Gigabit switches offer one very convincing advantage over previous generations: They get beyond the gigabit barrier far better than the alternative, link aggregation.