Cisco's Catalyst 6500 raises the stakes

Cisco might be a relative latecomer to 10G Ethernet switching, but it's hardly playing catch-up. Our exclusive lab tests show that new line cards and management modules for Cisco's Catalyst 6500 switches push the performance envelope in a number of ways:

•  Line-rate throughput with low delay and jitter. The Catalyst becomes only the second product tested to fill a 10G pipe.


•  Fast failover. The Catalyst set records for recovery times.

•  Perfect prioritization. The Catalyst is the only product that can protect high-priority traffic while simultaneously rate-limiting low-priority traffic.

•  IPv6 routing. In the first-ever public test of IPv6 routing, the Catalyst moved traffic at line rate even when handling 250 million flows.

The Catalyst's stellar performance in our tests, along with its rich feature set, earned it a World Class Award. Simply put, this is the highest-performing 10G Ethernet product we've tested to date.

To ensure an even comparison, we ran Cisco's new gear - WS-X6704-10GE line cards and WS-SUP720 management modules - through the same tests we used in our assessment of 10G Ethernet products early this year. Tests included pure 10G Ethernet performance; Gigabit Ethernet across a 10G Ethernet backbone; quality-of-service (QoS) enforcement; and failover times. For this review, we added IPv6 forwarding and routing (see How we did it).

In the 10G Ethernet tests, we used Spirent Communications' SmartBits to generate traffic in a four-port, full-mesh configuration. Cisco's 10G Ethernet cards delivered line-rate throughput for all tests (see Table 1). That puts the Catalyst on par with the E1200 from Force10 Networks.
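The line-rate figures in Table 1 fall straight out of Ethernet framing arithmetic: every frame on the wire carries 20 bytes of overhead (an 8-byte preamble plus the 12-byte minimum inter-frame gap) on top of its own length. A quick sketch of the calculation:

```python
def max_frames_per_sec(line_rate_bps: int, frame_bytes: int) -> int:
    """Theoretical maximum frame rate for an Ethernet link.

    Each frame occupies its own bytes plus 20 bytes of overhead
    (8-byte preamble + 12-byte minimum inter-frame gap).
    """
    OVERHEAD_BYTES = 20
    return line_rate_bps // ((frame_bytes + OVERHEAD_BYTES) * 8)

# 10G Ethernet, 64-byte frames: 14,880,952 frames/sec, as in Table 1.
print(max_frames_per_sec(10_000_000_000, 64))

# Gigabit Ethernet, 1,518-byte frames: 81,274 frames/sec per port.
print(max_frames_per_sec(1_000_000_000, 1518))
```

The same formula reproduces every "theoretical maximum" column in Table 1, including the 1,302,083 frames/sec figure for 76-byte IPv6 frames on Gigabit Ethernet.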

We should note that Cisco's 10G Ethernet cards are blocking - which causes frame loss - when all four ports exchange 64-byte frames between line cards. This was not an issue in our tests because we moved traffic between two ports on each of two cards. We think that's a fair comparison with previous products tested. Most of those had just one port per card, not four, so all previous tests were also across cards. Cisco says the new cards are nonblocking when handling a mix of frame sizes, but we did not verify this.

Delay and jitter with the Cisco 10G Ethernet cards weren't quite as low as previous record-holders from Foundry Networks and HP, but the numbers were well below the point at which application performance might suffer (see Table 2).

In the worst case (delay for 1,518-byte frames under 10% load), Cisco's average delay was 12.4 microsec, compared with 7.5 microsec for Foundry. Jitter was 0.5 microsec, compared with 0.6 microsec for Foundry in a similar test. Neither result will affect application performance.

We also conducted tests the way 10G Ethernet is most likely to be used - as a backbone technology. We built a test bed comprising two chassis connected with a 10G Ethernet link. Each chassis also had 10 (single) Gigabit Ethernet interfaces. We offered traffic from 510 virtual hosts to each Gigabit Ethernet interface, meaning there were 10,200 hosts exchanging traffic in a meshed pattern.

The Cisco setup delivered line-rate throughput at all frame sizes, and delay and jitter again trailed Foundry and HP by an insignificant margin. Cisco's highest average delay (with 1,518-byte frames) was 35.5 microsec, compared with 31.3 microsec for HP. Again, the difference isn't meaningful.

Introducing IPv6

It's important to understand why IPv6 testing matters to enterprise network managers today. The conventional wisdom is that IPv6 is only of interest in Asia, and there mainly as a science project. That perception is misguided, for two reasons.

First, depreciation schedules for backbone gear might run as long as five years, and by then IPv6 deployment is likely to be more extensive than today. Second, companies doing business with the federal government might need IPv6 support much sooner than that. Starting this month, the Department of Defense is requiring IPv6 in systems it evaluates, and other agencies are likely to follow suit.

Cisco's results with IPv6 traffic were nearly identical to those with IPv4. The vendor's new 10G cards delivered line-rate throughput in all cases. Delay and jitter were actually lower with short- and medium-length IPv6 frames than with IPv4, and delay with long frames was only slightly elevated.

All public tests of IPv6 to date have focused on forwarding rather than routing, mainly because IPv6 routing protocols are only now coming to market. Cisco's WS-SUP720 management module supports OSPFv3, the IPv6-enabled version of the popular Open Shortest Path First routing protocol. This was the first appearance of IPv6 routing in a public test.

We used Spirent's TeraRouting software to advertise 100,000 unique routes (each representing one network) over OSPFv3 to the pair of Catalysts. Because address scalability is a major selling point for IPv6, we then sent traffic to each of 250 virtual hosts on all 100,000 networks. This works out to 250 million total flows: On each of two chassis, we offered traffic to 10 interfaces from 250 hosts, each sending traffic to 50,000 networks on the other chassis.
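The 250 million figure is simply the product of the test-bed dimensions described above; the variable names below are our own shorthand for the numbers in the text:

```python
# Flow-count arithmetic for the IPv6 routing test bed.
chassis = 2
interfaces_per_chassis = 10
hosts_per_interface = 250
# Each host sent traffic to the 50,000 networks on the other chassis
# (half of the 100,000 advertised routes).
networks_per_host = 50_000

total_flows = (chassis * interfaces_per_chassis
               * hosts_per_interface * networks_per_host)
print(f"{total_flows:,}")  # 250,000,000
```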

To put this number in perspective, imagine we took the combined population of the 11 largest U.S. cities, gave everyone a computer, and routed all their traffic through one pair of Catalyst switches. To Cisco's credit, the switches did so at line rate, with average delay under 20 microsec (see Table 2).

QoS enforcement

Cisco Catalyst 6500 10 Gigabit Ethernet line cards

OVERALL RATING: 4.75

Company: Cisco. Cost: $99,995 as tested. Pros: Line-rate throughput; low latency and jitter; highly scalable; excellent IPv6 routing and forwarding. Cons: Four-port 10G card is blocking.

The breakdown:

10G Ethernet performance (25%): 4 ports across 2 line cards (IPv4); 4 ports across 2 line cards (IPv6). Score: 4

Gigabit Ethernet performance over 10G backbone (25%): 10 1G ports on each of 2 chassis over 10G backbone (IPv4); 10 1G ports on each of 2 chassis over 10G backbone, 100,000 routes (IPv6). Score: 5

QoS enforcement (25%): 12 1G ports on each of 2 chassis over 10G backbone (IPv4). Score: 5

Failover (15%): 1 1G port on each of 2 chassis over 10G backbone, 1 flow (IPv4); 1 1G port on each of 2 chassis over 10G backbone, 2 million flows (IPv4). Score: 5

Features (10%). Score: 5

TOTAL SCORE: 4.75

Scoring key: 5: Exceptional; 4: Very good; 3: Average; 2: Below average; 1: Consistently subpar

Cisco's Catalyst also outperformed previously tested products when it came to QoS enforcement. In this test, we offered three classes of traffic and required the switch to deliver high-priority traffic with no loss, even during congestion.

We also required the switch to restrict low-priority traffic so that it never used more than 2G bit/sec of bandwidth. And just to make things interesting, we emulated 252 hosts on each of 20 edge ports - making 5,040 virtual hosts in all.
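The review doesn't say how the Catalyst implements its rate limiter, but caps like the 2G bit/sec low-priority limit are classically enforced with a token bucket. The sketch below is a generic illustration of that technique, not Cisco's implementation:

```python
class TokenBucket:
    """Generic token-bucket rate limiter (illustrative only).

    rate_bps is the sustained rate cap; burst_bits is the bucket
    depth, i.e. how much traffic may briefly exceed the cap.
    """

    def __init__(self, rate_bps: float, burst_bits: float):
        self.rate = rate_bps
        self.capacity = burst_bits
        self.tokens = burst_bits  # bucket starts full
        self.last = 0.0

    def allow(self, frame_bits: int, now: float) -> bool:
        # Refill tokens for the elapsed time, capped at bucket depth.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= frame_bits:
            self.tokens -= frame_bits
            return True
        return False  # over the cap: drop or re-mark the frame

# Cap low-priority traffic at 2G bit/sec with a 1M-bit burst allowance.
low_priority = TokenBucket(rate_bps=2e9, burst_bits=1e6)
```

Traffic offered faster than 2G bit/sec drains the bucket and starts seeing drops, while traffic under the cap passes untouched.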

Previously tested products protected high-priority traffic but couldn't rate-control low-priority traffic. Cisco did both: The Catalyst 6500 delivered all high-priority traffic without loss and held low-priority traffic to within 0.01% of our 2G bit/sec cap.

Failure? What failure?

Our failover tests assessed how quickly a switch reroutes traffic onto a secondary link upon failure of a primary circuit. We tested Cisco's failover with both OSPF and IEEE 802.3ad link aggregation.

In our OSPF failover tests, the Catalyst rerouted traffic in an average of 195 millisec. That's slightly better than the 237-millisec failover Foundry posted in a previous test. With link aggregation, the Catalyst reduced failover time to just 45 millisec.
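The article doesn't spell out its measurement method, but failover times in benchmarks like this are conventionally derived from frame loss during the outage: recovery time equals frames lost divided by the offered frame rate. A sketch of that convention:

```python
def failover_time_ms(frames_lost: int, offered_fps: float) -> float:
    """Derive recovery time from frame loss during a failover event.

    Convention: each lost frame represents 1/offered_fps seconds of
    outage, so recovery time = frames_lost / offered_fps.
    """
    return frames_lost / offered_fps * 1000.0

# At line rate for 64-byte Gigabit Ethernet (1,488,095 frames/sec),
# a 195-millisec outage corresponds to roughly 290,000 lost frames.
print(failover_time_ms(290_178, 1_488_095))
```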

A Challenge to Other Vendors
Vendors of 10G Ethernet equipment, listen up: Network World has a standing offer to rerun the same tests performed here on your equipment. For product requirements and other information on having your system tested, contact Christine Burns at cburns@nww.com.

Cisco says other switches had too easy a time because we used only a single flow in our previous failover tests.

Cisco contends that competing switches are flow-based - they build Layer-2 forwarding tables keyed to individual flows - so their failover times grow as the flow count grows. The Catalyst, in contrast, routes traffic in a way that keeps Layer-2 flow counts down, letting it handle arbitrarily large numbers of flows with no performance hit.

We ran the failover test with 2 million flows, meaning that traffic for 1 million flows would be "failed over." The Catalyst's performance improved in this test, with OSPF failover taking 86 millisec and link aggregation failing over in 18 millisec. This validates Cisco's claim that it can support a large number of flows; we'll see how other vendors do in future tests.

Table 1: Cisco's 10G Ethernet throughput

Cisco's 10G Ethernet line cards in the Catalyst 6500 hit the theoretical maximum throughput in every test we threw at them.

Four 10G Ethernet interfaces, fully meshed IPv4 traffic

Frame length (bytes)  Theoretical maximum (frames per second)  Throughput per port (frames per second)
64  14,880,952  14,880,952
256  4,528,985  4,528,985
1,518  812,743  812,743

20 Gigabit Ethernet interfaces, 10G backbone, partially meshed IPv4 traffic

Frame length (bytes)  Theoretical maximum (frames per second)  Throughput per port (frames per second)
64  1,488,095  1,488,095
256  452,899  452,899
1,518  81,274  81,274

20 Gigabit Ethernet interfaces, 10G backbone, partially meshed IPv6 traffic

Frame length (bytes)  Theoretical maximum (frames per second)  Throughput per port (frames per second)
76*  1,302,083  1,302,083
256  452,899  452,899
1,518  81,274  81,274

20 Gigabit Ethernet interfaces, 10G backbone, 100,000 networks, 250 million IPv6 flows

Frame length (bytes)  Theoretical maximum (frames per second)  Throughput per port (frames per second)
76*  1,302,083  1,302,083

*76 bytes is the minimum IPv6 frame length supported by the test equipment.

Table 2: Cisco's 10G Ethernet delay and jitter

Delay and jitter with the Cisco 10G Ethernet cards weren't quite as low as with previously tested boxes from Foundry Networks or HP, but Cisco's numbers are well below the point at which application degradation would occur.

Four 10G Ethernet interfaces, fully meshed IPv4 traffic

Frame length (bytes)  Average delay (microsec)  Jitter (microsec)
64  10.2  0.3
256  10.0  0.6
1,518  12.4  0.5

20 Gigabit Ethernet interfaces, 10G backbone, partially meshed IPv4 traffic

Frame length (bytes)  Average delay (microsec)  Jitter (microsec)
64  19.3  0.7
256  20.9  1.0
1,518  35.5  1.7

20 Gigabit Ethernet interfaces, 10G backbone, partially meshed IPv6 traffic

Frame length (bytes)  Average delay (microsec)  Jitter (microsec)
76  18.1  0.8
256  20.0  0.5
1,518  39.4  3.0

20 Gigabit Ethernet interfaces, 10G backbone, 100,000 networks, 250 million IPv6 flows

Frame length (bytes)  Average delay (microsec)  Jitter (microsec)
76  19.9  Not tested


Newman is president of Network Test, an independent benchmarking and network design consultancy in Westlake Village, Calif. He can be reached at dnewman@networktest.com.

Newman is also a member of the Network World Global Test Alliance, a cooperative of the premier reviewers in the network industry, each bringing to bear years of practical experience on every review. For more Test Alliance information, including what it takes to become a member, go to www.nwfusion.com/alliance.
