How we tested 10 Gigabit Ethernet switches
We invited 13 vendors to participate in this test. Besides the six that accepted (Arista, Blade, Cisco, Dell, Extreme, HP ProCurve), we also invited 3Com/H3C, Brocade, Enterasys, Force10, Fujitsu, Juniper, and Raptor.
3Com/H3C, which HP acquired during testing, had earlier declined, citing resource constraints (which, we later learned, stemmed from preparations for the acquisition). Brocade, Enterasys, Force10 and Juniper also cited resource constraints. Fujitsu and Raptor did not respond to our invitation.
We assessed switches in 10 areas: features; management and usability; power consumption; MAC address capacity; unicast and multicast throughput; unicast and multicast latency and jitter; link aggregation fairness; multicast group capacity; multicast join/leave delay; and forward pressure.
For all but the first two areas, we used the Spirent TestCenter traffic generator/analyzer equipped with 24 10-gigabit Ethernet Hypermetrics CV modules and 10GBase-SR transceivers.
To assess switch features, we asked vendors to complete a detailed questionnaire. We did not verify every answer to this questionnaire.
The management and usability assessment was based in equal parts on our hands-on use of the switches during testing and on responses to the features questionnaire (for example, regarding supported network management methods).
We measured power consumption using Fluke 322 and Fluke 335 clamp meters. This test involved three measurements: AC line voltage; AC amperage when idle; and AC amperage when fully loaded. We fully loaded the switch control and data planes by configuring Spirent TestCenter to offer traffic at line rate to all ports. We derived wattage by multiplying voltage and amperage. We then removed 12 transceivers from each switch and repeated these measurements.
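The wattage arithmetic can be sketched as follows; the clamp-meter readings shown are hypothetical illustrations, not results from this test:

```python
def watts(volts: float, amps: float) -> float:
    """Derive power draw from measured AC line voltage and amperage."""
    return volts * amps

# Hypothetical clamp-meter readings, not measured results.
line_voltage = 208.0   # AC line voltage, volts
amps_idle = 3.1        # amperage with the switch idle
amps_loaded = 4.6      # amperage with all ports offered line-rate traffic

idle_w = watts(line_voltage, amps_idle)
loaded_w = watts(line_voltage, amps_loaded)
print(f"idle: {idle_w:.0f} W, loaded: {loaded_w:.0f} W")
```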
To measure MAC address capacity, we used the RFC 2889 wizard in Spirent TestCenter. This wizard conducts a binary search to find the largest number of MAC addresses a switch can learn without flooding. In all test iterations, Spirent TestCenter's MAC address aging timer was set to twice that of the switch under test.
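The wizard's binary search can be sketched like this; `learns_without_flooding` is a hypothetical stand-in for one test iteration (offer that many unique source MACs, then check that no frames were flooded):

```python
def max_mac_capacity(lo, hi, learns_without_flooding):
    """Binary-search the largest MAC address count a switch can learn
    without flooding, given a pass/fail check for one iteration."""
    best = 0
    while lo <= hi:
        mid = (lo + hi) // 2
        if learns_without_flooding(mid):
            best = mid       # this count passed; try a larger one
            lo = mid + 1
        else:
            hi = mid - 1     # flooding seen; try a smaller count
    return best

# Example: a hypothetical switch whose table holds 32,768 entries.
print(max_mac_capacity(1, 200_000, lambda n: n <= 32_768))
```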
To measure unicast throughput, latency and jitter, we configured Spirent TestCenter to offer traffic to all ports in a fully meshed pattern. For each test, we conducted separate 60-second runs with 64-, 65-, 108-, 256-, 1,518- and 9,216-byte frames, using a binary search to determine the throughput rate. For each frame length, we measured throughput, average and maximum latency and average and maximum jitter.
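Given per-frame latency samples, the average and maximum fall out directly, and jitter can be summarized as frame-to-frame latency variation. This is a common definition; the instrument's exact method may differ, and the sample values below are hypothetical:

```python
def latency_stats(samples):
    """Summarize per-frame latencies (microseconds, in arrival order)."""
    jitter = [abs(b - a) for a, b in zip(samples, samples[1:])]
    return {
        "avg_latency": sum(samples) / len(samples),
        "max_latency": max(samples),
        "avg_jitter": sum(jitter) / len(jitter),
        "max_jitter": max(jitter),
    }

# Hypothetical samples, not measured results.
print(latency_stats([5.0, 5.2, 5.1, 6.0, 5.3]))
```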
The multicast throughput, latency and jitter tests used the same frame lengths as in the unicast tests. Here, we configured a single Spirent TestCenter port to transmit multicast traffic, and the remaining 23 ports to join the same 989 multicast groups.
To assess link aggregation fairness, we configured Spirent TestCenter to act as a link aggregation partner using link aggregation control protocol (LACP), and also to emulate transmit and receive hosts on seven pairs of ports. Initially we brought up a link aggregation group (LAG) consisting of eight ports, and then offered unidirectional traffic to seven ports (in the direction from emulated ports on TestCenter to the switch, then on to LAG members on TestCenter, and then on to emulated hosts on TestCenter). We offered traffic at 10% of line rate to avoid oversubscription of any LAG member. Then we disabled LACP on one TestCenter port and offered the same traffic as in the eight-member LAG tests. In both cases, we recorded packets received on each LAG member, and derived fairness by calculating standard deviation across received-frame counts.
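The fairness figure reduces to a standard deviation across per-member frame counts, where a lower value means a more even split. A sketch, with hypothetical counts for an eight-member LAG:

```python
import statistics

def lag_fairness(frame_counts):
    """Population standard deviation across per-LAG-member received-frame
    counts; 0 would mean a perfectly even distribution."""
    return statistics.pstdev(frame_counts)

# Hypothetical per-member received-frame counts, not measured results.
counts = [125_000, 124_800, 125_300, 124_900,
          125_100, 124_700, 125_200, 125_000]
print(f"std dev: {lag_fairness(counts):.1f} frames")
```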
To measure multicast group capacity, we used the RFC 3918 wizard in Spirent TestCenter. This wizard joins a fixed number of groups and then attempts to forward traffic to all of them, using a binary search to find the highest number of groups joined successfully; an iteration passes only if the switch forwards traffic to every group joined. We used the most stressful possible condition: receivers on 23 ports concurrently joining all groups.
To measure multicast group join and leave delays, we again used TestCenter's RFC 3918 wizard. The delay tests work the opposite way from the throughput and latency tests: Here, we offer multicast traffic to the switch even though the IGMP snooping table is empty, and then offer IGMP join messages on all receiver ports. The join delay is the interval between transmission of a join message on a given port and receipt of the first multicast frame for that group on that port. The leave delay works the other way around: we measure the interval from transmission of a leave message until the switch stops forwarding traffic to that group on that port.
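Both delays reduce to timestamp differences; a minimal sketch with hypothetical timestamps:

```python
def join_delay(join_tx_ts, first_mcast_rx_ts):
    """Interval between sending an IGMP join on a port and receiving the
    first multicast frame for that group on the same port."""
    return first_mcast_rx_ts - join_tx_ts

def leave_delay(leave_tx_ts, last_mcast_rx_ts):
    """Interval between sending an IGMP leave and the last multicast frame
    the switch forwarded to that group on that port."""
    return last_mcast_rx_ts - leave_tx_ts

# Hypothetical timestamps in microseconds, not measured results.
print(join_delay(1_000.0, 1_450.0))   # join delay in microseconds
print(leave_delay(9_000.0, 9_120.0))  # leave delay in microseconds
```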
To measure forward pressure, we again used TestCenter's RFC 2889 wizard. This wizard lets users configure an illegally small gap between frames, smaller than the minimum 12-byte interframe gap required by the 802.3 Ethernet specification. Because Spirent TestCenter offers traffic to the switch faster than the legal limit, the switch will drop some traffic, but its forwarding rate also gives some indication of how fast its clock is set.
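The comparison rests on the theoretical line-rate frame rate: each frame on the wire carries 8 bytes of preamble and start delimiter plus, legally, at least a 12-byte interframe gap, so a forwarding rate above that ceiling suggests an over-fast clock. A sketch of the arithmetic (the 11-byte gap is just an example of an illegally small setting):

```python
def max_frame_rate(frame_bytes, line_rate_bps=10_000_000_000, ifg_bytes=12):
    """Theoretical frames/sec at line rate: each frame plus 8 bytes of
    preamble/start delimiter plus the interframe gap, at 8 bits per byte."""
    overhead_bytes = 8 + ifg_bytes
    return line_rate_bps / ((frame_bytes + overhead_bytes) * 8)

legal = max_frame_rate(64)                     # ceiling with the legal 12-byte gap
pressured = max_frame_rate(64, ifg_bytes=11)   # example illegally small gap
print(f"{legal:.0f} vs {pressured:.0f} frames/sec")
```

At 64-byte frames on 10G Ethernet, the legal ceiling works out to roughly 14.88 million frames/sec; shrinking the gap pushes the offered rate above it.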
Copyright © 2010 IDG Communications, Inc.