How we tested the 10 Gigabit Ethernet switches.

We asked vendors to supply two switch chassis, up to four 10G Ethernet interfaces, and a total of 24 Gigabit Ethernet interfaces. We assessed device performance in terms of pure 10G bit/sec throughput, delay and jitter; 1G bit/sec throughput, delay and jitter across a 10 Gigabit backbone; failover times; and quality-of-service enforcement.

Our primary test instrument was the SmartBits performance analysis system from Spirent Communications, equipped with XLW-3720A TeraMetrics 10G Ethernet cards and LAN-3311 TeraMetrics Gigabit Ethernet cards.

For the 10G Ethernet and backbone tests, the test traffic consisted of 64-, 256- and 1,518-byte Ethernet frames. All tests ran for 60 seconds, and the timestamp resolution of the SmartBits was plus or minus 100 nanosec.

In the 10G Ethernet tests, we asked vendors to assign a different IP subnet to each of four 10G interfaces in one chassis. We configured the SmartBits to offer traffic from 510 virtual hosts per interface in a fully meshed pattern (meaning traffic was destined for all other interfaces). We measured throughput, average delay at 10% load, and jitter.

In the backbone tests, we asked vendors to set up two chassis, each equipped with one 10G Ethernet interface and 10 Gigabit Ethernet edge interfaces. Here again, we asked vendors to assign a different IP subnet to each edge interface, and we configured the SmartBits to offer traffic from 510 virtual hosts per interface. This time, we offered traffic in a partially meshed multiple-device pattern; as defined in RFC 2889, that means the traffic we offered to one chassis was destined to all interfaces on the other chassis, and vice versa. Once again, the metrics were throughput, average delay at 10% load, and jitter.

In the failover tests, we set up two chassis, each equipped with one Gigabit Ethernet and two 10G Ethernet interfaces.
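As background for the frame sizes and line rates cited above, the theoretical maximum frame rate at a given size follows from the frame length plus 20 bytes of per-frame wire overhead (8-byte preamble and 12-byte minimum inter-frame gap). A minimal sketch of the arithmetic (our own illustration, not part of the test tooling):

```python
# Theoretical line-rate frame rates for the frame sizes used in the tests.
# Each Ethernet frame carries 20 extra bytes on the wire: an 8-byte
# preamble plus a 12-byte minimum inter-frame gap.
WIRE_OVERHEAD_BYTES = 20

def line_rate_fps(frame_bytes: int, link_bps: float) -> int:
    """Maximum frames per second at a given frame size and link speed."""
    bits_per_frame = (frame_bytes + WIRE_OVERHEAD_BYTES) * 8
    return int(link_bps // bits_per_frame)

# At 10G bit/sec, 64-byte frames arrive at 14,880,952 frames/sec,
# which is why small-frame tests are the hardest case for a switch.
for size in (64, 256, 1518):
    print(size, line_rate_fps(size, 10e9))
```

At minimum frame size a 10G interface must forward roughly 14.88 million frames per second, while 1,518-byte frames reduce that to about 813,000, which is why throughput results are always reported per frame size.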
We asked vendors to configure Open Shortest Path First metrics so that one 10G Ethernet interface would act as the primary route and the other would function as the secondary. We offered 64-byte frames to one Gigabit Ethernet interface at a rate of 100,000 frames per second; thus, we transmitted one frame every 10 microsec. Approximately 10 seconds into the test, we physically disconnected the primary link, forcing the switch to reroute traffic onto the secondary path. We derived failover time from frame loss.

In the QoS enforcement tests, we set up two chassis, each equipped with 12 Gigabit Ethernet interfaces and one 10G Ethernet backbone interface. Because we offered 128-byte frames to all 24 edge interfaces at line rate in a partially meshed pattern, we congested the switches by a 12-to-10 ratio. For this test we offered three classes of traffic in a 1-to-7-to-4 ratio.

We asked vendors to enforce four conditions. First, they had to mark incoming frames with specified Differentiated Services code points, something we verified by capturing and decoding traffic. Second, of the three traffic classes we offered, the switches should have delivered all high-priority traffic without loss. Third, the switches should have limited the rate of low-priority traffic so that it would not consume more than 2G bit/sec of backbone capacity. Finally, the switches should have allocated any remaining bandwidth to medium-priority traffic.

As a check against allocating a fixed amount of bandwidth to high-priority traffic, we reran the tests with only medium- and low-priority traffic present, in a 9-to-3 ratio. Vendors were not allowed to reconfigure devices between the first and second tests, and we expected the switches to allocate bandwidth previously used by high-priority traffic to the other classes.

A more detailed version of the test methodology is available here.

Back to the main review: "Testing 10 Gig Ethernet switches"
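The failover derivation described in the methodology reduces to simple arithmetic: at a constant offered rate, every lost frame represents one inter-frame interval of outage. A minimal sketch (our own illustration; the frame counts are hypothetical examples, not results from the review):

```python
def failover_time_seconds(lost_frames: int, offered_fps: float) -> float:
    """Derive failover time from frame loss at a constant offered load.

    At 100,000 frames/sec, one frame is transmitted every 10 microsec,
    so each lost frame accounts for 10 microsec of outage.
    """
    return lost_frames / offered_fps

# Hypothetical example: 5,000 lost frames at 100,000 frames/sec
# implies the switch took 50 milliseconds to reroute traffic onto
# the secondary path.
print(failover_time_seconds(5_000, 100_000))  # 0.05
```

This is why the tests used a fixed, known offered rate rather than line rate: a constant inter-frame interval makes frame loss a direct clock on the outage.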