by David Newman, Network World Lab Alliance

Foundry Networks’ BigIron MG8

Mar 01, 2004
Computers and Peripherals | Network Switches | Networking

A true 10G Ethernet switch, but failover tests raise resiliency issues

In an exclusive Network World lab test, Foundry Networks’ BigIron MG8 switch proved to be one of only two enterprise backbone switches to deliver wire-rate throughput on all interfaces of its 10G Ethernet line cards. Plus, it’s the only one to do so with minimal delay and jitter.

The MG8, which includes not only 10G Ethernet interfaces but also a new 40-port Gigabit Ethernet blade, also demonstrated first-rate quality-of-service (QoS) enforcement capabilities.

How we did it

Foundry’s MG8 features

Performance charts from our testing


However, while the MG8 lives up to its “Mucho Grande” moniker in terms of raw horsepower and traffic control, the late beta version Foundry supplied of its new 40-port Gigabit Ethernet line card has a few performance kinks. More seriously, our failover tests suggest that while the MG8 reroutes a single flow very quickly, recovery times might increase along with flow counts.

The vendor says a firmware upgrade due next month will improve performance on its 40-port card. Foundry also says a larger switch/router – the NetIron 40G – will address the failover issue. Late next month we plan to test the upgraded 40-port card and the 40G chassis.

We used Spirent’s SmartBits to measure the MG8’s throughput and delay – the same way we’ve tested 10G switches in the past (see reviews here and here) – in four configurations:

•  A pure 10G Ethernet setup with four interfaces.

•  Between groups of Gigabit Ethernet interfaces exchanging traffic across a 10G Ethernet backbone.

•  Within the 40-port Gigabit Ethernet line card.

•  Between the 40-port card and four 10G Ethernet interfaces (see How we did it).

Foundry’s best results came during the pure 10G Ethernet tests. The four-port 10G Ethernet module handled small, midsize and large frames at full 10-Gigabit line rate with zero loss (see the throughput graph).

The MG8 also delivered line-rate performance in our basic backbone test. This configuration tests 10G Ethernet the way it’s most likely to be used – as an aggregation technology for multiple Gigabit Ethernet links.

However, results were less than perfect in tests of Foundry’s 40-port Gigabit Ethernet line card. The late beta version we tested forwarded 64-byte frames at line rate, but dropped 256- and 1,518-byte frames in some tests.

In our 40-port full-mesh tests, the card delivered line-rate throughput with short frames, but throughput with 256-byte frames was equivalent to 96.9% of line rate. When handling 1,518-byte frames, the MG8’s new Gigabit Ethernet blade maxed out at 83% of line rate.

In tests where the 40-port Gigabit Ethernet card exchanged traffic with four 10G Ethernet interfaces – which demonstrates how the switch will perform as part of a 10G Ethernet backbone – the MG8 forwarded 64- and 256-byte frames at line rate. Throughput for 1,518-byte frames fell to the equivalent of 40.2% of line rate.
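Line-rate percentages like these translate directly into frames per second. As a rough sanity check, here's a minimal Python sketch of the standard arithmetic (each Ethernet frame carries 20 bytes of fixed overhead on the wire: an 8-byte preamble plus a 12-byte inter-frame gap); the function name and frame sizes are ours, not Foundry's or Spirent's:

```python
# Sketch: theoretical Ethernet line rate per frame size, assuming the
# standard 20 bytes of per-frame overhead (8-byte preamble + 12-byte
# inter-frame gap).
def line_rate_pps(frame_bytes: int, link_bps: int) -> float:
    """Maximum frames per second a link can carry at a given frame size."""
    bits_per_frame = (frame_bytes + 20) * 8
    return link_bps / bits_per_frame

GIG = 1_000_000_000
for size in (64, 256, 1518):
    # Aggregate capacity of the 40-port Gigabit Ethernet blade
    pps = line_rate_pps(size, 40 * GIG)
    print(f"{size}-byte frames: {pps:,.0f} frames/sec at 40G aggregate")
```

By this arithmetic, 83% of line rate with 1,518-byte frames means the blade was dropping roughly one frame in six at full offered load.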

The MG8 put up impressive delay and jitter numbers, low enough that switching delay is unlikely to affect application performance.

In the pure 10G Ethernet tests, the MG8 introduced delay of between 6.8 and 13.9 microsec, depending on frame length (see delay graph). Those figures are comparable to the results for Cisco’s 10G Ethernet blade.

However, because of a configuration error on our part, we threw 10 times as much traffic at Foundry’s switch as Cisco’s when measuring latency. Even under these conditions, the MG8 kept delay low and consistent. Jitter (delay variation) was a maximum of 2.5 microsec.

In delay tests of Gigabit Ethernet across a 10G Ethernet backbone, a pair of MG8s held up frames anywhere from 18.4 to 60.2 microsec for short and long frames, respectively.

Within a single 40-port blade, average delay ranged from 7.8 to 24.6 microsec. When moving traffic between the 40-port blade and 10G Ethernet interfaces, delay ranged from 9.3 to 20 microsec.

Failover foibles

Our failover tests measure the MG8’s ability to move traffic onto a secondary link when a primary link fails. Because availability trumps performance for many network professionals, this was an important test.

Things began well enough. We measured failover of a single flow using three technologies, and in all cases the switch redirected traffic in 34 msec or less. That’s better than Foundry’s first-generation product, and slightly faster than single-flow numbers for Cisco’s Catalyst 6500.

However, single-flow measurements aren’t terribly meaningful in an enterprise context, where huge numbers of flows might be involved. We found that Cisco Catalyst 6500 failover times for 1 million flows were similar to those for one flow.

Company: Foundry Networks
Cost: $182,260 as tested.
Pros: 10G cards have line-rate throughput and low latency; excellent QoS enforcement.
Cons: Scalability issues with failover; 40-port 1G cards are blocking.
The breakdown

10G Ethernet performance 25%
•  4 ports across 1 line card (IPv4)

Gigabit Ethernet performance over 10G backbone 25%
•  10 1G ports on each of 2 chassis over 10G backbone (IPv4)
•  40 1G ports on a single line card (IPv4)
•  40 1G ports on a single line card and 4 ports on 10G line card (IPv4)

QoS enforcement 25%
•  12 1G ports on each of 2 chassis over 10G backbone (IPv4)

Failover 15%
•  1 1G port on each of 2 chassis over 10G backbone, 1 flow (IPv4)

Features 10%: 4.5

Scoring Key: 5: Exceptional; 4: Very good; 3: Average; 2: Below average; 1: Consistently subpar

We could not test the MG8 this way because it cannot hold a routing table with 1 million entries. That’s hardly a fatal flaw, given that routing tables even at large companies are more on the order of 1,000 entries. But we were unable to run our test even with 1,000 entries. The MG8’s design requires a new entry in its Layer 2 forwarding table every time there’s a change in a flow’s Layer 3 routing information. Because the MG8 cannot forward traffic without a table entry, failover time increases with the number of flows being failed over.

Large numbers of routes can disappear from a backbone switch/router for reasons beyond a corporation’s control, such as an Internet route flap. In such situations, flow-based designs such as the MG8’s will take longer to reroute traffic than devices that “prepopulate” the forwarding database as they learn routes.
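The scaling difference between the two designs can be sketched in a few lines of Python. This is purely illustrative, not Foundry's or anyone else's implementation, and the per-operation costs are made-up numbers chosen only to show the shape of the curves:

```python
# Illustrative sketch: why failover time can scale with flow count in a
# flow-based forwarding design, but stays flat with a prepopulated FIB.
# Both cost constants below are hypothetical, for illustration only.

FLOW_ENTRY_INSTALL_US = 30   # assumed cost to install one forwarding entry
NEXT_HOP_REWRITE_US = 50     # assumed one-time cost to swap a next hop

def flow_based_failover_us(num_flows: int) -> int:
    # Each active flow needs a fresh forwarding-table entry after the
    # route change before its traffic can move to the backup link,
    # so total recovery time grows linearly with the flow count.
    return num_flows * FLOW_ENTRY_INSTALL_US

def prepopulated_failover_us(num_flows: int) -> int:
    # The backup route is already in the forwarding database; only the
    # next-hop pointer changes, regardless of how many flows ride on it.
    return NEXT_HOP_REWRITE_US

for flows in (1, 1_000, 1_000_000):
    print(f"{flows:>9} flows: flow-based {flow_based_failover_us(flows):>11} us, "
          f"prepopulated {prepopulated_failover_us(flows)} us")
```

Under these assumptions, both designs recover a single flow almost instantly, but only the prepopulated design stays flat as flow counts climb into the thousands or millions.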

Foundry says failover times haven’t been a problem even for its large enterprise customers.

Our QoS tests assessed the MG8’s ability to perform two types of prioritization at once. The goal was to see if the MG8 could protect the high-priority traffic while simultaneously limiting low-priority traffic to no more than 2G bit/sec. The MG8 met both QoS goals.
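The article doesn't detail how the MG8 enforces its rate cap in hardware; the token bucket is the classic mechanism for limiting a traffic class to a fixed rate, and this hypothetical Python sketch shows the idea behind a 2Gbit/s ceiling on low-priority traffic (class name and burst size are our assumptions):

```python
# Sketch of a generic token-bucket rate limiter of the kind typically used
# to cap a traffic class at a fixed rate (here, 2Gbit/s). This illustrates
# the mechanism in general, not the MG8's actual hardware implementation.
class TokenBucket:
    def __init__(self, rate_bps: float, burst_bits: float):
        self.rate = rate_bps        # token refill rate, in bits per second
        self.capacity = burst_bits  # maximum burst size, in bits
        self.tokens = burst_bits    # bucket starts full
        self.last = 0.0             # timestamp of the last refill

    def allow(self, frame_bits: int, now: float) -> bool:
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= frame_bits:
            self.tokens -= frame_bits
            return True   # frame is in profile: forward it
        return False      # out of profile: drop (or mark) the frame

# 2Gbit/s cap with a one-frame (1,518-byte = 12,144-bit) burst allowance
low_priority = TokenBucket(rate_bps=2e9, burst_bits=12_144)
```

A second frame arriving before the bucket refills is dropped, which is exactly the behavior our test looked for: low-priority traffic held to 2Gbit/s while high-priority traffic passes untouched.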

Network World gratefully acknowledges the vendors who supported this project. Spirent Communications supplied its SmartBits traffic generator/analyzer system, including XLW-3721 10G Ethernet cards and LAN-3325 TeraMetrics XD 10/100/1000 Ethernet cards.

Thanks also to Siemon Co., which supplied a variety of multimode cabling specially for this project.

Newman is president of Network Test, an independent benchmarking and network design consultancy in Westlake Village, Calif. He can be reached at