FIRST LOOK: Cisco Nexus 9000

Powerful new router is cornerstone of Cisco's push to revolutionize data center networking

The Cisco Nexus 9000 series, the fruit of Cisco's Insieme spin-in, is more than another fast router -- it's a change in the way that high-end routers are designed and built.

And, in the very near future, it will be a cornerstone of Cisco’s application-centric infrastructure (ACI), a tighter melding of applications, servers, and network infrastructure than has ever existed. 

We got a first look at Cisco’s newest shipping hardware, the Nexus 9508 chassis and the 36-port 40Gbps line cards, in Cisco’s own labs in San Jose, Calif. Although we weren’t allowed to touch the hardware, we did supervise performance tests that confirmed the awesome throughput of the Nexus 9508.

With Internet-sized packets (1,500 octets), a fully populated Nexus 9508 delivered line speed (just shy of 40Gbps) on each of 288 ports, with zero packet loss and average latency of 624 nanoseconds (that's 0.0006 milliseconds) port-to-port on the same card, or 2,050 nanoseconds when crossing from one line card to another.

When we mixed up the ports a little so that every port sent traffic to every other port (meshed throughput testing), the per-port average speed was nearly identical, although latencies jumped by about 50% over the inter-card latency, ranging from 2,412 nanoseconds (for 64-octet frames) to 6,007 nanoseconds (for 1,518-octet frames), with a high of 26,928 nanoseconds (for jumbo frames of 9,216 octets). Again, there was no packet loss.
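These line-rate figures can be cross-checked with standard Ethernet arithmetic: every frame on the wire carries 20 octets of fixed overhead (7-octet preamble, 1-octet start-of-frame delimiter, and a 12-octet inter-frame gap), which caps the frame rate at any given frame size. A quick sketch, using the port counts and line rates from the tests above (the helper names are ours, not Cisco's or Ixia's):

```python
# Standard Ethernet wire overhead per frame, in octets:
# 7 (preamble) + 1 (start-of-frame delimiter) + 12 (inter-frame gap)
WIRE_OVERHEAD = 20

def max_frames_per_second(line_rate_bps: float, frame_octets: int) -> float:
    """Theoretical maximum frame rate for one port at a given line rate."""
    return line_rate_bps / ((frame_octets + WIRE_OVERHEAD) * 8)

LINE_RATE = 40e9  # 40Gbps per port
PORTS = 288       # fully populated Nexus 9508

# Per-port maximum frame rates at the tested frame sizes
fps_64 = max_frames_per_second(LINE_RATE, 64)      # ~59.5 million frames/sec
fps_1518 = max_frames_per_second(LINE_RATE, 1518)  # ~3.25 million frames/sec

# Aggregate chassis throughput when all ports run at line rate
aggregate_tbps = PORTS * LINE_RATE / 1e12          # 11.52 Tbps
```

Hitting those per-port frame rates on all 288 ports simultaneously, with zero loss, is what "line-rate, non-blocking" means in practice.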

In IP multicast testing, the Nexus 9508 kept up its line-rate, zero-loss performance, whether one port transmitted to 287 others (one multicast group with 287 receivers) or the traffic was split into 20 groups of 14 ports each.

These tests were done in Layer 3 mode -- each port was on a different subnet, and the device was routing, not switching, the traffic. Cisco doesn't have the switching code ready quite yet, but claims that when the Nexus 9000 is switching instead of routing, performance will remain at line rate.

We also looked at power consumption, an important concern for data center managers. A fully loaded Nexus 9508 chassis with 288 active 40Gbps fiber ports draws about 11 watts per port with no traffic at all. With ports connected to fiber and powered on, running a typical Internet traffic load (Ixia's IMIX), the power draw increases to about 16 watts per port. That's a very modest power budget for the speed.
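To put those per-port figures in chassis terms, a little arithmetic (the measurements are from the tests above; the variable names are illustrative):

```python
PORTS = 288
IDLE_W_PER_PORT = 11    # measured with no traffic
LOADED_W_PER_PORT = 16  # measured under Ixia IMIX traffic load
LINE_RATE_GBPS = 40

idle_chassis_w = PORTS * IDLE_W_PER_PORT      # 3,168 W idle, whole chassis
loaded_chassis_w = PORTS * LOADED_W_PER_PORT  # 4,608 W under load

# Efficiency under load: watts per Gbps of forwarding capacity
watts_per_gbps = LOADED_W_PER_PORT / LINE_RATE_GBPS  # 0.4 W per Gbps
```

Under 5 kW for 11.52 Tbps of forwarding capacity is the "modest power budget" in concrete terms.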

Bottom line: The Nexus 9000 line is built for speed, and lots of it. This is line-rate, non-blocking, 40Gbps routing at large scale, all the way up to 576 ports of 40Gbps in the not-yet-announced 16-slot Nexus 9500 chassis. Or, if you prefer 10Gbps, which Cisco supports via 40Gbps-to-10Gbps fiber breakout cables, there's potential for a mind-numbing 1,152 10Gbps connections in the announced eight-slot Nexus 9508 chassis, or double that in the 16-slot chassis to come.

Network managers supporting virtualization who are thinking about 10Gbps links to servers in data centers, with 40Gbps out of top-of-rack switches, can drop in the Nexus 9500 or 9300 as a cost-effective way of boosting bandwidth within the data center.  The Nexus 9000 series also connects directly to Cisco’s Fabric Extender modules, first introduced with the Nexus 5000 and 7000 series, providing a way to deliver over-subscribed 1Gbps and 10Gbps connections in top-of-rack environments. 

What makes Nexus 9500 different

Certainly the Insieme team that designed the Nexus 9500 brought a lot of routing and switching innovation to the table. Making use of commodity switch components where they could (in particular, Broadcom’s Trident II switching chips) and minimizing component count, the 9500 has an elegant design. 

No backplane or midplane constrains the device. Instead, line cards inserted horizontally in the front of the chassis link directly to connectors on fabric modules inserted vertically in the back. The fabric modules handle inter-card communications and provide scalability. This connection is specific to the line cards and fabric modules; the junction between them is not a general-purpose communications bus.

If your requirements stretch to the performance limits of the Nexus 9508, drop in up to six fabric modules to get full non-blocking performance across the entire switch. If you don’t need that much throughput, save some money -- those fabric modules have a list price of $16,000 -- and start with just two.

The hardware is also designed for maintainability. Redundant half-wide supervisor modules that handle routing operations and management sit in dedicated slots in the front of the chassis, while redundant switch controller modules go in dedicated slots in the rear. Fans, power supplies, line cards, fabric modules, and almost every other component can be removed and replaced on the fly, and the whole Nexus 9508 can be broken down into its component parts in just a few minutes.

The Nexus 9000 operating system, derived from the Nexus 7000 code base, has a number of differences that will appeal to data center managers. VXLAN support has been added, a major requirement for large-scale virtualization.

In addition, Cisco has cracked open the management interface to dramatically increase the network manager's ability to automate and extend the behavior of the switch, with multiple configuration APIs, direct access to the underlying Bash shell in the operating system, integration with orchestration tools such as OpenStack and Puppet/Chef, and the ability to add 64-bit Linux containers to the switch.

All of these are valuable additions in large-scale environments where configuring networks using the CLI is impractical or too error-prone. 

We spent a little time looking at the extended management tools, entirely as demonstrations scripted by Cisco technical team members. When things went right -- which they did more than half of the time -- we saw an impressive array of options to enable different techniques for configuration distribution, control and patching. However, we were reminded that the Nexus 9000 series was not quite ready for production deployment, as management interfaces crashed and we jumped from device to device to get the “right” software build for each of the scripted demos. 

New operating system: ACI

If the Catalyst 6500 is the Swiss Army knife of switching platforms, serving in the wiring closet, the core, the edge, and everywhere in between, the Nexus 9000 has a stripped-down profile that makes it suitable only for data center environments.

Because of the backplane-free architecture, there will never be functional blades for the Nexus 9000 similar to those in the Catalyst series: firewalls, wireless LAN concentrators, and so on.

In fact, the Nexus 9000 offers even less than the Nexus 7000 in terms of features -- Fibre Channel over Ethernet, MPLS, and Data Center Interconnect have all been dropped.

So, where does the Nexus 9000 series fit into Cisco's world? Does it overlap with the Nexus 7000 and 5000 series? Is it a replacement?

The answer, unfortunately, is “it’s complicated.” 

The Nexus 9500 can run in one of two modes, depending on the operating system loaded and the line cards installed: standalone (NX-OS) mode, which is what Cisco is shipping and what we sort-of tested, and ACI mode.

When it’s in standalone mode, it’s a bigger, faster, cheaper Nexus 7000, almost. But when it’s in ACI mode, with an entirely different operating system and entirely different line cards, the Nexus 9000 is something else. (See "Who supports ACI and why?")

The Nexus 9000 series, including the chassis-based 9500 and the fixed-configuration 9300, is the first salvo in Cisco's new vision for switching in highly virtualized data centers.

Most, but not all, of the components of the Nexus 9508 chassis are common to both NX-OS and ACI mode: the chassis, the supervisor cards, power supplies, and fabric modules. But the line cards differ in a critical way: the ACI cards add custom Cisco chips (Application Leaf Engines, or ALEs) that bring additional smarts and additional buffering beyond what the NX-OS line cards offer. (Cisco says that ACI-mode line cards can also be used in NX-OS mode, but the additional capabilities of the cards will not be accessible to NX-OS.)

New management system: APIC

Before diving into the cards, it’s important to note that ACI switches and routers are not managed like any other Cisco switches and routers. This isn’t the slight-variation-on-IOS that network managers are getting used to with the new ISR G2 models like the 4451X and the Nexus switch operating system. 

ACI mode is completely different: there is no command line. ACI devices are managed via an ACI Controller, called an APIC (Application Policy Infrastructure Controller), which pushes all significant configurations into the device. The configuration isn't expressed as a text file as in IOS and its children; it's a completely different animal. There's no way to configure a Nexus 9000 in ACI mode without an APIC.

For Cisco switching and routing loyalists, ACI could be a difficult pill to swallow. However, it’s way too early to guess whether network managers will use this wicked-fast hardware in ACI mode or NX-OS mode. 

When the Nexus 9500 and 9300 are running in ACI mode, network architectures are considerably different. The only ACI line card currently available for the Nexus 9508 is a 48-port 1/10GigE card (both copper and SFP+ variations are available) with four 40Gbps QSFP+ uplink ports. The fixed-configuration Nexus 9300 will come in two flavors. A 3U, 96-port 1/10GigE (copper only) version with eight 40Gbps QSFP+ ports will have an oversubscription ratio of 3:1, assuming that all 96 ports are connected to 10Gbps servers or downstream switches.

For network managers who insist on non-blocking architectures, the other Nexus 9300 is a 2U device with 48 ports of 1/10GigE (SFP+ for copper or fiber) and 12 40Gbps QSFP+ ports for uplinks.
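Those oversubscription ratios fall straight out of the port math. A minimal sketch, using the port counts of the two Nexus 9300 models described above:

```python
def oversubscription(down_ports: int, down_gbps: float,
                     up_ports: int, up_gbps: float) -> float:
    """Ratio of downlink capacity to uplink capacity; 1.0 means non-blocking."""
    return (down_ports * down_gbps) / (up_ports * up_gbps)

# 3U Nexus 9300: 96 x 10GigE down, 8 x 40Gbps QSFP+ up
ratio_3u = oversubscription(96, 10, 8, 40)   # 960/320 -> the 3:1 ratio

# 2U Nexus 9300: 48 x 10GigE down, 12 x 40Gbps QSFP+ up
ratio_2u = oversubscription(48, 10, 12, 40)  # 480/480 -> non-blocking
```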

New architecture: Spine-and-leaf

The term "non-blocking" is actually a pretty important part of the Nexus 9000 ACI architecture. While most network managers have been building two- or three-tier networks in their data centers (edge, distribution, and core layers), the Nexus 9000 team is pushing a different approach: spine-and-leaf, which reduces the number of elements between any two devices in the data center to a maximum of three: one leaf switch, the spine switch, and the leaf switch at the other end. When properly configured, and with balanced traffic, a Nexus 9000 data center configuration doesn't present traffic bottlenecks.

What’s the big difference between spine-and-leaf and a three-tier architecture? It’s not just that one tier is missing; it’s that every edge switch (called a “leaf” in this architecture) is only two hops away from every other edge switch, because every node in the spine is connected to all of the leaf switches. This means that as networks scale, it’s not just a question of linking up the top-of-rack switch to one upstream device. Instead, the top-of-rack or end-of-row has to connect to all of the spine nodes. Suddenly, all those 40Gbps ports make more sense.
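The scaling consequences can be sketched numerically. This assumes a simple two-tier fabric where every leaf has exactly one uplink to every spine node; the function and the example numbers are hypothetical illustrations, not a Cisco reference design:

```python
def fabric_scale(spines: int, spine_ports: int,
                 leaf_downlink_gbps: float, uplink_gbps: float = 40):
    """Rough capacity math for a two-tier spine-and-leaf fabric.

    Every leaf connects once to every spine, so the leaf count is capped
    by a spine switch's port count, and each leaf's uplink bandwidth is
    (number of spines) x (uplink speed).
    """
    max_leaves = spine_ports                 # one link from each leaf per spine
    uplink_capacity = spines * uplink_gbps   # total uplink Gbps per leaf
    non_blocking = uplink_capacity >= leaf_downlink_gbps
    return max_leaves, uplink_capacity, non_blocking

# Hypothetical example: leaves with 48 x 10Gbps server ports and 12 x 40Gbps
# uplinks (like the 2U 9300), under a spine of twelve 12-port 40Gbps nodes.
leaves, uplink_total, nb = fabric_scale(
    spines=12, spine_ports=12, leaf_downlink_gbps=48 * 10)
# -> up to 12 leaves, 480Gbps of uplink per leaf, non-blocking
```

The need for one uplink per spine node is exactly why high 40Gbps port density at the edge "makes more sense" in this design.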

Network managers looking closely at the Nexus 9000 series in ACI mode may find that the physically smaller Nexus 9300 offers a high-enough density of 40Gbps ports to become the network spine. To link network edge (leaves) to a Nexus 9300 spine, Cisco supports top-of-rack switching with Nexus 2000 Fabric Extenders or Nexus 9300 switches, or a Nexus 9500 chassis at end-of-row. This is most appropriate in environments where 1Gbps server connections are still in use. 

One advantage of the Nexus 9500 chassis devices is price: Cisco's reduced component count and increasing use of third-party LAN switching chips from Broadcom (what it calls "merchant silicon") give the line attractive pricing. In the large chassis, Cisco is beating its other product lines on 10/40Gbps pricing: $490 to $625 a port for 10Gbps copper ports (depending on whether the chassis is half-populated or fully populated), and $850 to $975 a port for 10Gbps SFP+ ports -- although don't forget to add about $1,000 a port for the fiber-optic transceivers if you go SFP+. But that price makes no sense if you want to use the switch with some ports in 1Gbps mode, except as a short-term transition.

For the 9300 switch, pricing is low enough that some network managers might be able to get a unit or two just to learn more about ACI: $250 a port for 10Gbps copper, and $300 a port for 10Gbps SFP+, again still requiring the additional expensive optics. 
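Putting the list prices and the rough transceiver estimate together gives the effective per-port cost. All figures are from the text above; the $1,000 optics number is the article's approximation, not a quoted price:

```python
OPTICS = 1000  # approximate cost of one SFP+ fiber-optic transceiver

# Nexus 9500 chassis, 10Gbps SFP+ ports: $850-$975 list plus optics
nexus_9500_sfp = (850 + OPTICS, 975 + OPTICS)  # $1,850-$1,975 effective

# Nexus 9300, 10Gbps SFP+ ports: $300 list plus optics
nexus_9300_sfp = 300 + OPTICS                  # $1,300 effective

# Nexus 9300, 10Gbps copper ports: no transceiver needed
nexus_9300_copper = 250                        # $250 effective
```

The copper ports' four-to-one advantage over SFP+ once optics are counted is worth keeping in mind when comparing quotes.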

It’s hard to test something that doesn’t exist outside of the labs, so offering a verdict on the ACI mode Nexus 9000 doesn’t make a lot of sense right now. With a barely-baked ACI technology, one announced chassis (the eight-slot) and one configuration of ACI line card (in copper and SFP flavors), the Nexus 9500 and 9300 are not going to win any market share for a long time to come.  No matter how amazing the Nexus 9000 hardware and ACI technology are, and how much they revolutionize data center network fabrics, we’re looking at the very beginning of a long road.

Copyright © 2014 IDG Communications, Inc.
