Cisco's ASR 1000 router built for 10-year tenure

Tests show ASR 1000 to be powerful, versatile swap for 7200-series routers

With enterprises looking to consolidate data centers and devices, Cisco's new ASR 1000 series router offers a compelling message: Do more with less.


In an exclusive Clear Choice test, the ASR not only moved traffic at 20Gbps but also did so while running QoS, security and monitoring functions on 120 million flows from hundreds of concurrent routing sessions.

The ASR also proved a capable performer when handling multicast and IPSec VPN traffic. And with a 40-core processor, the ASR has enough headroom to run firewalls, load balancers and other services without requiring additional hardware.

That's not to say the ASR isn't still a work in progress. Its data-plane capacity still needs to grow, and Cisco hasn't yet rolled out all the services that ASRs eventually will support. But this is a strong initial effort, well worth considering for the many enterprises looking to replace tiers of aging 7200 routers with a single more powerful system.

Introducing the ASR

ASR 1000 series hardware -- which began shipping last April and was upgraded in November (see announcement blogs) -- has three components: an embedded service processor (ESP) for data-plane traffic, a route processor (RP) for control-plane functions and one or more line cards. The ASR family includes two-, four- and six-slot models; for this test Cisco supplied the top-of-the-line six-slot ASR 1006 with redundant RP and ESP modules and power supplies.

The ASR's most notable new feature is its ESP module, which is built around the 40-core Quantum Flow Processor (QFP). Through separate software licenses, QFP supports numerous services such as firewalls, NetFlow and Nbar classifiers and, in the future, caching load balancers. The ESP module also offers powerful QoS features, with 128,000 queues and support for up to 1,000 global policies and classification maps.

While the RP is functionally similar to Cisco 7200 routing modules, it scales higher; a million Border Gateway Protocol routes and hundreds of thousands of Open Shortest Path First (OSPF) routes are possible. Scalability also extends to the number of routing sessions: Our tests involved hundreds of concurrent OSPF sessions, something we haven't been able to set up with earlier midrange Cisco routers. The RP also offers an integrated session border controller for VoIP traffic and unified communications.

ASR line cards use the same shared port adapter (SPA) design as Cisco 7600, Cisco 12000 and CRS-1 routers and are interchangeable among them, which should help control sparing costs. The SPA modules in turn fit into SPA interface processor (SIP) line cards.

The ASR's operating system is IOS XE, a Linux-based variant of Cisco's IOS software. XE looks and feels similar to IOS on 7200 routers, but it's actually just another process running under Linux. Unlike earlier versions where a problem with one process could crash the whole system, this modular design should help contain faults.

On the downside, the IOS XE command-line interface doesn't leverage powerful Unix/Linux shell features. Pattern matching of command output is limited; there's no inline configuration editing; and IOS XE does not accept IPv4 addresses entered using classless inter-domain routing (CIDR) notation.

We assessed the ASR with tests of unicast and multicast performance and scalability, high availability and IPSec tunnel capacity (see "How we did it").

In unicast tests, we put an emphasis on services above and beyond simple packet blasting. In addition to enabling OSPF as the routing protocol, we configured the ASR 1006 so that each of 205 subinterfaces had two 103-line access control lists (ACL) applied. On the QoS front, the routers classified and queued up to four different traffic types. We also enabled unicast reverse path forwarding (uRPF) and NetFlow accounting. (See the full system configurations used for testing.)
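For readers unfamiliar with this feature set, a minimal IOS-style sketch of one such subinterface follows. The interface name, VLAN, addresses, ACL numbers and policy name here are hypothetical, and the actual test applied this pattern across 205 subinterfaces with 103-line ACLs:

```
interface GigabitEthernet0/0/0.10
 encapsulation dot1Q 10
 ip address 10.0.10.1 255.255.255.0
 ip access-group 110 in                       ! inbound ACL
 ip access-group 111 out                      ! outbound ACL
 ip verify unicast source reachable-via rx    ! uRPF check
 ip flow ingress                              ! NetFlow accounting
 service-policy output QOS-4CLASS             ! classify/queue four traffic types
!
router ospf 1
 network 10.0.0.0 0.255.255.255 area 0
```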

Many current routers and switches use NetFlow to track, at most, tens of thousands of flows. The previous high-water mark in any test we've done was 512,000 flows (see Cisco Nexus test).

The ASR's NetFlow cache can track 2 million flows at any one time. But with even more flows – and our tests introduced 120 million flows in as little as 12 seconds – the ASR will simply do "emergency aging" of older flows with no performance penalty. This is with full NetFlow monitoring; larger numbers of flows could be monitored using sampling techniques.
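The "emergency aging" behavior can be pictured as a fixed-size cache that evicts its oldest entry whenever a new flow arrives at capacity, so forwarding never stalls. The sketch below is an illustrative model only, not Cisco's implementation; the capacity is tiny here, whereas the ASR's cache holds 2 million flows.

```python
from collections import OrderedDict

class FlowCache:
    """Toy model of a flow cache with emergency aging of the oldest flow."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.flows = OrderedDict()   # flow key -> packet count, oldest first

    def record(self, key):
        if key in self.flows:
            self.flows[key] += 1
            self.flows.move_to_end(key)         # refresh this flow's recency
        else:
            if len(self.flows) >= self.capacity:
                self.flows.popitem(last=False)  # emergency-age the oldest flow
            self.flows[key] = 1

cache = FlowCache(capacity=3)
for flow in ["a", "b", "c", "d"]:    # a fourth flow forces aging of "a"
    cache.record(flow)
print(list(cache.flows))             # ['b', 'c', 'd']
```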

We also ran OSPF on a large scale, both in terms of session count and routing table size. Cisco configured OSPF to run on each of 205 subinterfaces – 20 on each of 10 1-gigabit interfaces and five on one 10-gigabit interface. In contrast, many enterprise routers run one or at most a handful of OSPF adjacencies.

We advertised routes to 300,000 networks to the 10G Ethernet subinterfaces and 20,000 more routes on the gigabit Ethernet side. For context, consider that the largest production OSPF networks in North America handle OSPF databases of 50,000 routes.

Even with all these conditions in place, the ASR delivered line-rate performance with midsized and large-sized packets.

Table showing Cisco ASR 1006 performance

With minimum-length 64-byte Ethernet frames, the ASR's throughput topped out at around 10.4 million packets per second (mpps), or around 35% of line rate. That's slightly higher than the ESP20 module's rated capacity of 10 mpps, but both this and the line-rate numbers with midsize and large packets represent system limits.

Cisco supplied the ASR 1006 with SPAs in three of its 12 slots. Adding more ports won't increase aggregate bandwidth or packet-per-second performance, at least not with current hardware; 20Gbps throughput and 10.4 mpps is as fast as current ESP modules will go. Thus, oversubscription of up to 6:1 is possible with current line cards and ESP modules. That's not necessarily a showstopper – many enterprises never come anywhere close to fully utilizing a fully loaded ASR 1006 – but it is something to bear in mind when doing capacity planning.
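The packet-rate and oversubscription figures above can be sanity-checked with a little arithmetic. The 20-byte per-frame overhead (8-byte preamble plus 12-byte interframe gap) is standard Ethernet framing; the fully loaded interface bandwidth is an assumed example of 12 slots at 10Gbps each.

```python
# Theoretical 64-byte packet rate on a 20Gbps data plane.
LINE_RATE_BPS = 20e9      # ESP20 data-plane capacity
FRAME_BYTES = 64          # minimum-length Ethernet frame
OVERHEAD_BYTES = 20       # preamble (8) + interframe gap (12)

bits_per_frame = (FRAME_BYTES + OVERHEAD_BYTES) * 8
line_rate_pps = LINE_RATE_BPS / bits_per_frame

measured_pps = 10.4e6
fraction = measured_pps / line_rate_pps

print(f"theoretical 64-byte rate: {line_rate_pps / 1e6:.1f} mpps")  # ~29.8 mpps
print(f"measured 10.4 mpps is {fraction:.0%} of line rate")         # ~35%

# Oversubscription: a hypothetical fully loaded chassis (12 slots x 10Gbps)
# versus the 20Gbps the ESP can actually forward.
interface_bw_bps = 12 * 10e9
print(f"oversubscription: {interface_bw_bps / LINE_RATE_BPS:.0f}:1")  # 6:1
```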

Average unicast latency was low and consistent with small and large packets, but jumped up into the millisecond range with mid-length packets – a significant delay even in a WAN context. RFC 2544, the industry standard methodology for router testing, requires latency to be measured at, and only at, the throughput rate. Cisco notes that delay is far lower (around 88 microsec) with an offered load just 1% less than the throughput rate.

When handling multicast traffic – important for video and collaborative applications – the ASR turned in excellent numbers. In our tests, emulated hosts on each of 200 subinterfaces joined 200 multicast groups, each of which had 50 transmitters on one 10G Ethernet interface. Running protocol independent multicast-sparse mode (PIM-SM), the ASR router thus had to replicate incoming packets from each of 50 sources 200 times, for a total of 10,000 multicast routes.

The router forwarded multicast packets of all three sizes at line rate. Latency was significantly higher than with unicast traffic, mainly because of replication and "fanout" (the number of destination interfaces). However, the multicast delay numbers are generally in line with other high-end switches and routers we've tested.

IPSec tunnel capacity

We also validated the ability of the ASR 1006 to handle 2,000 concurrent IPSec tunnels, fielding both encrypted and a mix of encrypted and cleartext traffic. We connected a pair of ASR 1006s using a Cisco 7604 as an intermediate router. One ASR emulated a headquarters router at a large enterprise while the other emulated 2,000 remote "sites."

We offered cleartext frames from Spirent TestCenter from the remote "sites" bound for networks at headquarters, and used a packet sniffer to verify that the ASRs put all traffic into 2,000 unique IPSec tunnels. As is common with tests of security devices, throughput was significantly lower than with cleartext traffic alone because of the extra processing required for encryption and authentication.

Throughput for 64-, 256- and 1400-byte frames was equivalent to 14%, 41% and 81% of line rate, respectively – far lower than the line-rate results we saw for midsized and large packets in the unicast tests.

But lower crypto performance doesn't mean lower overall performance. We retested IPSec with a mix of encrypted and cleartext traffic. This time, aggregate throughput was essentially line rate in both directions. This suggests enabling encryption won't cause any performance penalty for other traffic.

High availability

We assessed high-availability and resiliency features with four sets of failover and software installation tests. Since the ESP and RP modules directly handle packets, we conducted separate failover tests of each. Failover was virtually instantaneous with both: The ESP module dropped 408 packets out of more than 600 million offered, for a cutover time of 39 microsec. The RP modules failed over perfectly: They dropped zero packets in the transition from active to standby modules.
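The 39-microsecond cutover figure follows directly from the drop count and the offered packet rate, here assumed to be the 64-byte throughput rate of roughly 10.4 million packets per second:

```python
# Cutover time = packets lost during failover / offered packet rate.
dropped_packets = 408
offered_rate_pps = 10.4e6    # 64-byte throughput rate from the unicast tests

cutover_seconds = dropped_packets / offered_rate_pps
print(f"cutover time ~= {cutover_seconds * 1e6:.0f} microseconds")  # ~39
```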

Chart showing how the ASR 1006's high availability stacks up

We also measured the time necessary for software upgrades and downgrades of the ASR. These both involve multiple steps, starting with software changes to the ESP and RP modules and then moving onto the SIP (line card) modules.

This was not a truly "hitless" procedure. The SIP modules were not redundant; thus, significant packet loss occurred as we upgraded or downgraded the SIP modules. An upgrade took about nine minutes while a downgrade took eight minutes. As the ESP and RP failover numbers indicate, the downtime is almost entirely attributable to software changes on the line cards.

Cisco noted that the upgrade/downgrade times were a result of not using redundant interfaces in this test. We'd agree that adding redundancy would mitigate or eliminate downtime caused by SIP module software changes. Also, we conducted the high availability tests with 64-byte frames offered at the throughput rate; downtime would have been lower with less heavy traffic loads.

The Cisco 7200 seemed mighty powerful when Cisco introduced it about a decade ago, with what was then a speedy CPU and a decadent 256MB of RAM. The 40 cores of today's ASR 1000 seem similarly extravagant. But as enterprises look to replace their aging 7200s – and perhaps consolidate many of them onto a single, more powerful platform – the ASR 1000 series represents a promising option.

Newman is president of Network Test, an independent test lab in Westlake Village, Calif. He can be reached at


Network World gratefully acknowledges the support of Spirent Communications, which made this project possible. Spirent supplied its Spirent TestCenter traffic generator/analyzer for this project, and test engineers Travis Andrews, Mark Hall, Brooks Hickman, Joshua Jansen, Steven Leventhal and Marc Pelletier offered technical support.

NW Lab Alliance

Newman is also a member of the Network World Lab Alliance, a cooperative of the premier reviewers in the network industry each bringing to bear years of practical experience on every review. For more Lab Alliance information, including what it takes to become a member, go to

Copyright © 2009 IDG Communications, Inc.
