by David Newman, Network World Lab Alliance

How we did it

Reviews
Mar 01, 2004
Computers and Peripherals | Network Switches | Networking

How we tested the Foundry BigIron MG8 switch.

We asked Foundry to supply two switch chassis (priced at $38,000), a management module ($6,500), four 10G Ethernet interfaces ($50,000, plus $5,000 for each of four 1,310-nanometer LR Xenpak transceivers), and two of its new 40-port Gigabit Ethernet blades ($50,000 each, plus $445 for each of 40 850-nanometer SFP transceivers). Foundry also supplied 850-nanometer SR Xenpak transceivers ($3,500) to interconnect the chassis, because SR is the lower-cost option.

As in earlier reviews, we assessed device performance in terms of pure 10G bit/sec throughput, delay and jitter; 1G bit/sec throughput, delay and jitter across a 10 Gigabit backbone; failover times; and quality-of-service (QoS) enforcement.

Our primary test instrument was the SmartBits performance analysis system from Spirent Communications, equipped with XLW-3721 TeraMetrics 10G Ethernet cards and LAN-3325 TeraMetrics XD 10/100/1000 Ethernet cards. We used Spirent’s SAI, SmartFlow, and TeraRouting applications to generate traffic.

For the 10G Ethernet and backbone tests, the test traffic consisted of 64-, 256-, and 1,518-byte Ethernet frames. The duration for all tests was 60 seconds, and the timestamp resolution of the SmartBits was plus or minus 100 nanosec.
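For context, the theoretical maximum frame rate on a 10G Ethernet link depends only on the frame size plus the fixed 20 bytes of preamble and interframe gap, so the ceilings for the three frame sizes we used can be worked out directly. The short Python sketch below (not part of the test tooling) shows that arithmetic:

# Theoretical maximum frame rate on a 10G Ethernet link for the frame
# sizes used in the tests. Each frame carries 20 extra bytes on the
# wire: an 8-byte preamble plus a 12-byte interframe gap.
LINK_BPS = 10_000_000_000          # 10G Ethernet line rate, bits/sec
OVERHEAD_BYTES = 8 + 12            # preamble + interframe gap

for frame_bytes in (64, 256, 1518):
    bits_on_wire = (frame_bytes + OVERHEAD_BYTES) * 8
    max_fps = LINK_BPS / bits_on_wire
    print(f"{frame_bytes:>5}-byte frames: {max_fps:,.0f} frames/sec")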

In the 10G Ethernet tests, we asked Foundry to assign a different IP subnet to each of four 10G interfaces in one chassis. We configured the SmartBits to offer traffic from 510 virtual hosts per interface in a fully meshed pattern (meaning traffic was destined for all other interfaces). We measured throughput, latency and jitter.
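In a fully meshed pattern, every interface offers traffic destined to every other interface. The Python sketch below illustrates how those source-destination pairs are enumerated for the four 10G ports; the interface labels and subnet addresses are hypothetical, not the ones actually configured:

# Sketch of a fully meshed pattern across four 10G interfaces, each on
# its own IP subnet with 510 virtual hosts (hypothetical addressing).
interfaces = {
    "10g-1": "10.1.0.0/23",   # a /23 holds up to 510 usable hosts
    "10g-2": "10.1.2.0/23",
    "10g-3": "10.1.4.0/23",
    "10g-4": "10.1.6.0/23",
}

# Full mesh: every interface sends to all other interfaces.
pairs = [(src, dst) for src in interfaces for dst in interfaces if src != dst]
print(len(pairs), "directed interface pairs")   # 4 x 3 = 12
for src, dst in pairs:
    print(f"  {src} ({interfaces[src]}) -> {dst} ({interfaces[dst]})")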

In the backbone tests, we asked vendors to set up two chassis, each equipped with one 10G Ethernet interface and 10 edge interfaces using Gigabit Ethernet. We again asked vendors to assign a different IP subnet to each edge interface and we configured the SmartBits to offer traffic from 510 virtual hosts per interface. This time, we offered traffic in a partially meshed multiple-device pattern; as defined in RFC 2889, that means the traffic we offered to one chassis was destined to all interfaces on the other chassis and vice versa. Once again, the metrics were throughput, latency and jitter.
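The partially meshed multiple-device pattern of RFC 2889 differs from a full mesh in one key respect: ports on one chassis send only to ports on the other chassis, so every test frame must cross the 10G backbone. A small sketch with hypothetical port labels:

# RFC 2889 partially meshed multiple-device pattern: each edge port on
# chassis A sends to every edge port on chassis B, and vice versa, so
# all traffic traverses the 10G backbone link.
chassis_a = [f"A-gig{n}" for n in range(1, 11)]   # 10 Gigabit edge ports
chassis_b = [f"B-gig{n}" for n in range(1, 11)]

pairs = [(s, d) for s in chassis_a for d in chassis_b]
pairs += [(s, d) for s in chassis_b for d in chassis_a]
print(len(pairs), "directed port pairs, all crossing the backbone")  # 200

# Unlike a full mesh of all 20 edge ports, no pair stays local to one chassis.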

In the failover tests, we set up two chassis, each equipped with one Gigabit Ethernet and two 10G Ethernet interfaces. We asked vendors to configure Open Shortest Path First (OSPF) metrics so that one 10G Ethernet interface would act as the primary route and the other would serve as the secondary.
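OSPF prefers the route with the lowest total cost, so giving one 10G link a lower metric makes it the primary path and leaves the higher-cost link as a standby that takes over only when the primary fails. A minimal sketch of that selection logic, with hypothetical cost values rather than the metrics actually configured:

# Minimal sketch of OSPF-style route preference: the lowest-cost usable
# route wins; when it fails, the next-lowest cost route takes over.
routes = {"10g-primary": 1, "10g-secondary": 10}   # interface -> OSPF cost
link_up = {"10g-primary": True, "10g-secondary": True}

def best_route():
    candidates = [(cost, ifc) for ifc, cost in routes.items() if link_up[ifc]]
    return min(candidates)[1] if candidates else None

print(best_route())              # 10g-primary
link_up["10g-primary"] = False   # simulate disconnecting the primary link
print(best_route())              # 10g-secondary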

We offered a single flow of 64-byte frames to one Gigabit Ethernet interface at a rate of 1,000,000 frame/sec; thus, we transmitted one frame every microsecond. Approximately 10 seconds into the test, we physically disconnected the primary link, forcing the switch to reroute traffic onto the secondary path. We derived failover time from frame loss. We attempted to repeat the same test with 2 million flows, forcing 1 million to be failed over, but were unable to complete this test because of design issues with the Foundry device.
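Because the offered load was a constant 1,000,000 frames per second, the failover-time arithmetic is direct: every lost frame represents 1 microsecond during which the switch was not forwarding. A short sketch of that conversion (the loss count shown is purely illustrative, not a measured result):

# Deriving failover time from frame loss at a constant offered load:
# outage time = frames lost / offered rate. At 1,000,000 frames/sec,
# each lost frame corresponds to 1 microsecond of failover time.
OFFERED_RATE_FPS = 1_000_000

def failover_time_ms(frames_lost):
    return frames_lost / OFFERED_RATE_FPS * 1_000   # milliseconds

# Illustrative value only, not a result from the review.
print(failover_time_ms(250_000))   # 250,000 lost frames -> 250.0 ms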

In the QoS enforcement tests, we set up two chassis, each equipped with 12 Gigabit Ethernet interfaces and one 10G Ethernet backbone interface. We offered 128-byte frames at line rate to all 24 edge interfaces in a partially meshed pattern, congesting the switches by a 12-to-10 ratio. For this test we offered three classes of traffic in a 1-to-7-to-4 ratio.

We asked Foundry to configure its switches to enforce four conditions. First, the switches would have to mark incoming frames using specified Differentiated Services code points, something we verified by capturing and decoding traffic. Second, of the three traffic classes we offered, the switches should have delivered all high-priority traffic without loss. Third, the switches should have limited the rate of low-priority traffic so that it would not consume more than 2G bit/sec of backbone capacity. Finally, the switches should have allocated any remaining bandwidth to medium-priority traffic.
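Taken together, those conditions determine how the 12G bit/sec offered toward each 10G backbone link should be divided: high-priority traffic passes untouched, low-priority traffic is capped at 2G bit/sec, and medium-priority traffic absorbs whatever capacity remains. The sketch below works through that arithmetic for the 1-to-7-to-4 traffic mix; it is a model of the expected allocation, not the switch's actual scheduler:

# Expected per-class delivery over a 10G backbone link under the four
# QoS conditions: high priority is lossless, low priority is capped at
# 2G bit/sec, and medium priority gets the remaining bandwidth.
BACKBONE_GBPS = 10.0
LOW_CAP_GBPS = 2.0

def expected_delivery(offered):   # offered load per class, in Gbit/sec
    high = offered.get("high", 0.0)                     # delivered in full
    low = min(offered.get("low", 0.0), LOW_CAP_GBPS)    # rate-limited
    medium = min(offered.get("medium", 0.0),
                 BACKBONE_GBPS - high - low)             # leftover capacity
    return {"high": high, "medium": medium, "low": low}

# 12G bit/sec offered in a 1-to-7-to-4 ratio:
print(expected_delivery({"high": 1.0, "medium": 7.0, "low": 4.0}))
# -> {'high': 1.0, 'medium': 7.0, 'low': 2.0}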

As a check against allocating a fixed amount of bandwidth to high-priority traffic, we reran the tests with only medium- and low-priority traffic present in a 9-to-3 ratio. Foundry was not allowed to reconfigure devices between the first and second tests, and we expected the switches to allocate bandwidth previously used by high-priority traffic to the other classes.
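The same expected_delivery sketch above covers the re-run: with 12G bit/sec offered only as medium- and low-priority traffic in a 9-to-3 ratio, the low-priority cap should still hold at 2G bit/sec, and medium-priority traffic should pick up the bandwidth the high-priority class no longer uses:

# Re-run with no high-priority traffic, offered in a 9-to-3 ratio
# (reuses expected_delivery from the previous sketch):
print(expected_delivery({"medium": 9.0, "low": 3.0}))
# -> {'high': 0.0, 'medium': 8.0, 'low': 2.0}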

Back to review: Foundry BigIron MG8