How we tested Cisco's VSS

We assessed VSS performance with three sets of tests measuring fabric bandwidth and delay, failover times, and unicast/multicast performance across a network backbone.

For all tests described here, we configured a 10,000-line access control list (ACL) covering layer-3 and layer-4 criteria, and spot-checked that randomly chosen entries in the ACL blocked traffic as intended. As a safeguard against users making unauthorized changes to packet markings, Cisco engineers also configured the access and core switches to re-mark the differentiated services code point (DSCP) in every packet, and we verified re-marking using counters in the Spirent TestCenter traffic generator/analyzer. Cisco also enabled NetFlow traffic monitoring for all test traffic.
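For readers who want to automate that kind of spot check, the Python sketch below shows the shape of the verification logic. The counter records and the expected re-marked DSCP value are hypothetical; the article does not spell out the actual marking values used.

    # Minimal sketch of a DSCP re-marking spot check. Assumes per-stream
    # receive counters exported from the analyzer as (stream, dscp, frames);
    # both the records and the expected DSCP value are hypothetical examples.
    EXPECTED_DSCP = 0

    received_counters = [
        ("access-to-core-1", 0, 1_000_000),
        ("access-to-core-2", 0, 1_000_000),
        ("core-to-access-1", 46, 1_000_000),   # a non-zero value here would mean re-marking was missed
    ]

    for stream, dscp, frames in received_counters:
        status = "ok" if dscp == EXPECTED_DSCP else "RE-MARKING MISSED"
        print(f"{stream}: dscp={dscp}, frames={frames} -> {status}")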

To assess fabric bandwidth and delay, the system under test was a pair of Cisco Catalyst 6509-E switches, each equipped with eight WS-X6708 eight-port 10G Ethernet line cards and one Virtual Switching Supervisor 720-10G management/switch fabric card. Cisco engineers set up a virtual switch link (VSL) between the switches using one of the two 10G Ethernet ports on each supervisor. That left a total of 130 10G Ethernet test ports: eight on each of the line cards, plus the remaining 10G Ethernet port on each supervisor.
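For readers keeping count, the port arithmetic works out like this (a quick sketch using the numbers above):

    switches = 2
    line_card_ports = switches * 8 * 8   # eight 8-port 10G line cards per switch = 128
    supervisor_ports = switches * 2      # two 10G ports on each supervisor card = 4
    vsl_ports = switches * 1             # one supervisor port per switch reserved for the VSL
    test_ports = line_card_ports + supervisor_ports - vsl_ports
    print(test_ports)                    # 130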

Using the Spirent TestCenter traffic generator/analyzer, we offered 64-, 256- and 1518-byte IPv4 unicast frames on each of the 130 10G test ports to determine throughput and delay. We measured delay at 10% of line rate, consistent with our practice in previous 10G Ethernet switch tests. The Spirent TestCenter analyzer emulated 100 unique hosts on each port, making for 13,000 total hosts.
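To make those numbers concrete, the sketch below computes the per-port frame rate at 10% of line rate for each frame size, plus the emulated host count. It assumes the standard 20 bytes of per-frame Ethernet overhead (8-byte preamble plus 12-byte minimum inter-frame gap); the exact rates offered by Spirent TestCenter are not listed in the article.

    LINE_RATE_BPS = 10_000_000_000   # 10G Ethernet
    OVERHEAD_BYTES = 20              # preamble + minimum inter-frame gap

    def frames_per_second(frame_bytes, load=1.0):
        """Theoretical frame rate for a given frame size and fractional load."""
        return load * LINE_RATE_BPS / ((frame_bytes + OVERHEAD_BYTES) * 8)

    for size in (64, 256, 1518):
        print(size, round(frames_per_second(size, load=0.10)))   # delay measured at 10% of line rate

    print(130 * 100)                 # 100 emulated hosts on each of 130 ports = 13,000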

In the failover tests, the goal was to compare VSS recovery time upon loss of a switch with recovery using older redundancy mechanisms.

This test involved three pairs of Catalyst 6509 switches, representing the core, distribution and access layers of an enterprise network. We ran the failover tests in three configurations. In the first, we used legacy redundancy mechanisms such as Rapid Spanning Tree Protocol and Hot Standby Router Protocol (HSRP). We then ran two failover scenarios using VSS, first with a virtual switch link on the distribution switches alone, and again with virtual switch links on both the distribution and core switches.

For each test, we offered traffic to each of 16 interfaces on the core and access sides of the test bed, beginning with a baseline run to verify that there was no frame loss before any failure. Then, while Spirent TestCenter offered test traffic for 300 seconds, we cut power to one of the distribution switches. Because we offered traffic to each interface at a rate of 100,000 frames per second, each dropped frame represented 10 microsec of recovery time. So, for example, if Spirent TestCenter reported 32,000 lost frames, then failover time was 320 millisec.
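Expressed as code, the conversion from frame loss to recovery time is a one-liner; this sketch simply restates the arithmetic in the paragraph above.

    OFFERED_RATE_FPS = 100_000                   # frames per second offered to each interface

    def failover_time_ms(lost_frames):
        # each lost frame represents 1/100,000 sec, or 10 microsec, of outage
        return lost_frames * 1000.0 / OFFERED_RATE_FPS

    print(failover_time_ms(32_000))              # 320.0 millisec, matching the example above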

The backbone performance tests used a setup similar to the VSS configurations in the failover tests: three pairs of Catalyst 6509 switches representing the core, distribution and access layers of an enterprise network. As in the failover tests, we ran one set of tests with a virtual switch link on the distribution switches alone, and another with virtual switch links on both the distribution and core switches.

To represent enterprise conditions, we set up very large numbers of routes, hosts and flows in these tests. From the core side, we configured Open Shortest Path First (OSPF) to advertise 176,000 unique routes. On the access side, we set up four virtual LANs (VLANs), each with 250 hosts, on each of 16 ports, for 16,000 hosts in total. For multicast, one host in each access-side VLAN joined each of 40 groups, and each group had 16 transmitters, one on each of the 16 core-side interfaces. In all, this test represented more than 10,000 multicast routes and more than 5.6 billion unique unicast flows.
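Those scale figures can be sanity-checked with a little arithmetic. The host and unicast-flow counts below follow directly from the configuration described; the multicast-route line is only one plausible accounting (groups times transmitters times core-side interfaces), since the article does not spell out how the "more than 10,000" figure is reached.

    routes = 176_000                      # OSPF routes advertised from the core side
    hosts = 16 * 4 * 250                  # 16 access ports x 4 VLANs x 250 hosts = 16,000
    print(hosts)

    # If every access-side host exchanges traffic with every advertised route,
    # in both directions, the flow count comes to roughly 5.6 billion.
    unicast_flows = hosts * routes * 2
    print(unicast_flows)                  # 5,632,000,000

    # One plausible reading of the multicast figure (an assumption, not from the article):
    multicast_routes = 40 * 16 * 16       # groups x transmitters x core-side interfaces
    print(multicast_routes)               # 10,240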

In the backbone tests, we used a partially meshed traffic pattern to measure system throughput and delay. As defined in RFC 2285, a partial mesh is a pattern in which ports on one side of the test bed exchange traffic with ports on the other side, but not with ports on the same side. In this case, that meant every access port exchanged traffic with every core port, and vice versa.
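As a rough illustration of that pattern (with hypothetical port names, not the actual test configuration), the directed source/destination pairings could be generated like this:

    access_ports = [f"access-{i}" for i in range(1, 17)]
    core_ports = [f"core-{i}" for i in range(1, 17)]

    # Partial mesh per RFC 2285: every access port pairs with every core port,
    # in both directions; ports on the same side never exchange traffic.
    pairs = [(src, dst) for src in access_ports for dst in core_ports]
    pairs += [(src, dst) for src in core_ports for dst in access_ports]

    print(len(pairs))                     # 512 directed port pairs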

We tested all four combinations of traffic type (unicast only, or mixed multicast/unicast) and virtual switching on the core switches (enabled or disabled); virtual switching was always enabled on the distribution switches and always disabled on the access switches. In all four backbone test setups, we measured throughput and delay.

We conducted these tests in an engineering lab at Cisco's campus in San Jose. This is a departure from our normal procedure of testing in our own labs or at a neutral third-party facility. The change was born of logistical necessity: Cisco's lab was the only one available within the allotted timeframe with sufficient 10G Ethernet test ports and electrical power to conduct this test. Network Test and Spirent engineers conducted all tests and verified the configurations of both switches and test instruments, just as we would in any test. The results presented here would be the same regardless of where the test was conducted.

