Quick Thoughts on the New Nexus 5000

Yesterday, Cisco announced the Nexus 5000 series of data center switches. The 5000, along with the Nexus 7000, brings high-density 10-Gigabit access to the data center. I did a quick review of the 5000 and made some notes.

  • It is a wire-rate, low-latency (3.2 microseconds), Layer 2-only switch for data centers.
  • It runs NX-OS, just like the 7000, but is missing a few key NX-OS features, like Virtual Device Contexts (VDCs). I am very upset the 5000 does not have VDCs; I think they are a key virtualization feature for data center network design.
  • The 5000 has a lossless crossbar switching fabric provided by only two ASICs.
  • Fibre Channel over Ethernet (FCoE) support.
  • Supports Data Center Ethernet (DCE), which helps make the fabric lossless. DCE uses priority flow control (PFC), delayed dropping, and backward congestion notification.
  • Cheap 10-Gigabit cabling with the new twinax copper cable, which provides an inexpensive in-rack cabling option from the 5000 to the servers. Traditional SFP fiber connections are also supported.
  • All ports and power entry connections are at the rear of the switch, simplifying cabling and minimizing cable length. This has been a big problem in our DCs since we have sealed cold aisles. In other switches, the server ports are in the back in the hot aisle, but the network switch ports are in the front in the cold aisle, so running cables from front to back required holes in the sealed cold aisles. The 5000 has everything in the back - in the hot aisle - so the cold aisle can remain sealed.
  • Cut-through switching gives the 5000 its 3.2-microsecond latency. The switch can send the first bit of a packet on the egress interface just 3.2 microseconds after the first bit was received on the ingress interface, and this latency does not change with packet size.
  • Virtual Output Queues (VOQs) not just for every port, but for each IEEE 802.1p class of service (CoS) queue on every port, so there is no head-of-line blocking even within a QoS queue.
  • Ethernet Host Virtualizer (EHV) makes the 5000 appear to be a single host to the upstream switches (maybe a pair of 7000s). This removes the need for spanning tree and allows both uplinks from the 5000 to be used, instead of one being blocked by spanning tree as in normal Layer 2 switching.
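The cut-through point above is worth a quick worked example. A minimal sketch, assuming a generic 10 Gigabit Ethernet link: only the 3.2-microsecond figure comes from Cisco's spec; the store-and-forward numbers are generic arithmetic, not measurements of any particular switch.

```python
# Sketch: why cut-through latency is size-independent while
# store-and-forward latency grows with packet size.
# Assumption: a generic store-and-forward switch with the same 3.2 us
# forwarding delay, used purely for comparison.

LINK_RATE_BPS = 10e9             # 10 Gigabit Ethernet
CUT_THROUGH_LATENCY_S = 3.2e-6   # fixed port-to-port latency (Cisco spec)

def serialization_delay(packet_bytes, rate_bps=LINK_RATE_BPS):
    """Time to clock an entire packet off the wire."""
    return packet_bytes * 8 / rate_bps

def store_and_forward_latency(packet_bytes, forwarding_delay_s=3.2e-6):
    # A store-and-forward switch must receive the whole packet before
    # forwarding, so serialization delay is added to the forwarding delay.
    return serialization_delay(packet_bytes) + forwarding_delay_s

def cut_through_latency(packet_bytes):
    # Forwarding begins once the header is read; packet size is irrelevant.
    return CUT_THROUGH_LATENCY_S

for size in (64, 512, 1500, 9000):
    sf = store_and_forward_latency(size) * 1e6
    ct = cut_through_latency(size) * 1e6
    print(f"{size:5d} B  store-and-forward: {sf:6.2f} us   cut-through: {ct:.2f} us")
```

At 10 Gbps a 9000-byte jumbo frame takes 7.2 microseconds just to serialize, so a store-and-forward switch more than triples the latency for large packets while cut-through stays flat.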

Currently, the only 5000 model is the 5020, which provides up to 56 10-Gigabit ports. Forty of those ports are fixed on the chassis; three expansion modules provide additional 10-Gigabit or native Fibre Channel ports.

I think the biggest benefit the 5000 will provide is a unified connection to the servers with FCoE. In our DC, we currently have a centralized cabling design: all servers and other devices are cabled back to centralized POD switches - 6509s with hundreds of connections - and we also have SAN connections to 9513s. This reduces the number of switches in the DC but limits flexibility for moves. It seems cabinets, power, and servers are moving every day in our DC, and that often requires cabling changes whose costs add up quickly. So we are now looking at a top-of-rack design with distributed access switches. Switches, which are a capital expenditure, are cheaper than cabling changes, which are an immediate OPEX hit. The 5000 would give us a single 10-Gigabit FCoE connection to each server over which both IP and SAN traffic would flow. From the 5000, there would be separate uplinks to the 6509s for IP and to the 9513s for SAN.
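The capex-versus-opex trade-off above comes down to simple break-even arithmetic. A minimal sketch with entirely made-up numbers - none of the costs or churn rates below come from the article; they only illustrate the shape of the calculation:

```python
# Hypothetical figures for illustration only: the switch cost, per-change
# cabling cost, and move rate are assumptions, not quoted prices.
TOR_SWITCH_COST = 35_000      # assumed one-time capex per top-of-rack switch
CABLING_CHANGE_COST = 1_500   # assumed opex per centralized cabling change
MOVES_PER_RACK_PER_YEAR = 6   # assumed churn rate

def breakeven_years(switch_cost=TOR_SWITCH_COST,
                    change_cost=CABLING_CHANGE_COST,
                    moves_per_year=MOVES_PER_RACK_PER_YEAR):
    """Years until avoided cabling changes pay for a top-of-rack switch."""
    return switch_cost / (change_cost * moves_per_year)

print(f"break-even after ~{breakeven_years():.1f} years")
```

With these assumed numbers the switch pays for itself in under four years; the higher the churn, the faster top-of-rack wins.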

This makes system enrollment, server support, and cabling much simpler: one connection (maybe two for redundancy) to every server and we're done. And since it's top-of-rack, it's easy to move the server or the whole cabinet if necessary. DCE and the lossless fabric provide the performance and stability that high-performance Ethernet and Fibre Channel need. The only limitation I see is the lack of Gigabit support on the 5000, unlike the 7000. So if the 5000 is at the top of the rack, there will be a Cisco 4948 right below it to provide normal Gigabit Ethernet access. I would like to see a Gigabit module for the 5000.




Copyright © 2008 IDG Communications, Inc.