First, the concept of a unified fabric inside the data center can finally merge many DC network topologies and technologies. The Nexus series will support FCoE for SAN connectivity, collapsing Fibre Channel onto the Ethernet network. However, since FCoE isn't shipping yet, I have to hold off judgment on whether this is a true differentiator. It would be nice to have native Fibre Channel SAN cards in the Nexus, truly creating a unified fabric (I couldn't find anything saying Cisco was working on that). Second, we all love bandwidth, and the Nexus 7000 brings it. Most enterprise DCs will never need this much bandwidth, but it sets the switch up for essentially non-blocking transfers even on oversubscribed cards. You are essentially throwing bandwidth at the data center. Proper port assignment for hosts and uplinks could further protect bandwidth levels. A nice idea for the future would be per-port bandwidth reservations like the MDS series can do. Today's backplane bandwidth is far lower, but the marketing hype promises 15 Tbps across the whole chassis in the future. The math works out, as best I can tell, like this:
  230 Gbps in per slot
+ 230 Gbps out per slot
----
  460 Gbps per slot
x  16 line-card slots in an 18-slot chassis (2 slots go to supervisors)
----
 7,360 Gbps (7.36 Tbps)
x   2 future speed enhancements
----
~15,000 Gbps (15 Tbps)
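If you want to play with the arithmetic yourself, here's a quick sanity check of the marketing math. The per-slot figures come from the discussion above; the 2x multiplier is the hypothetical future fabric speedup, not a shipping spec.

```python
# Sanity check of the ~15 Tbps marketing math for an 18-slot chassis.
# Figures come from the article; the 2x multiplier is the rumored
# future fabric speed bump, not a shipping specification.

per_slot_in_gbps = 230
per_slot_out_gbps = 230
payload_slots = 16        # 18 slots minus 2 supervisor slots
future_speedup = 2

per_slot_total = per_slot_in_gbps + per_slot_out_gbps   # 460 Gbps
today_gbps = per_slot_total * payload_slots             # 7,360 Gbps
future_gbps = today_gbps * future_speedup               # 14,720 Gbps

print(f"Today:  {today_gbps / 1000:.2f} Tbps")   # Today:  7.36 Tbps
print(f"Future: ~{future_gbps / 1000:.1f} Tbps") # Future: ~14.7 Tbps
```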
Third, and what I think is most intriguing, is the use of Virtual Output Queues (VOQs) in an Ethernet switch to prevent head-of-line (HOL) blocking. HOL blocking can seriously degrade switch performance. VOQs, according to Wikipedia, are "an input queuing strategy in which each input port maintains a separate queue for each output port". So, instead of a single queue for traffic entering the backplane destined for an output interface, there is now a separate queue on each input port for every output port (yes, that's a lot of queues on a 256-port 10 Gigabit system). HOL blocking is gone. The centralized fabric arbiter ensures data from the VOQs enters the fabric only when the receiving module, which hosts the output interface, is ready to actually serialize the data onto the wire. The fabric arbiter also ensures fabric module bandwidth is used as efficiently as possible. Finally, there's Ethernet inside the Nexus 7000.
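To make the VOQ idea concrete, here's a toy sketch of the mechanism in Python. This is my own illustration, not Cisco's implementation: the class names and the one-grant-per-cycle arbiter are simplifications I invented to show why a congested egress port no longer blocks traffic bound for an idle one.

```python
from collections import deque

class VoqIngressPort:
    """Toy model of an ingress port using Virtual Output Queues: one
    separate queue per egress port, so frames stuck behind a busy egress
    never delay frames bound for an idle one (no HOL blocking).
    Illustrative only; not Cisco's actual design."""

    def __init__(self, port_id, num_egress_ports):
        self.port_id = port_id
        # One queue for every possible output port.
        self.voqs = [deque() for _ in range(num_egress_ports)]

    def enqueue(self, frame, egress_port):
        self.voqs[egress_port].append(frame)

    def dequeue_for(self, egress_port):
        q = self.voqs[egress_port]
        return q.popleft() if q else None

class FabricArbiter:
    """Central arbiter: grants fabric access only when the egress port
    is ready to serialize the frame onto the wire."""

    def __init__(self, num_egress_ports):
        self.egress_ready = [True] * num_egress_ports

    def schedule(self, ingress_ports):
        granted = []
        for egress in range(len(self.egress_ready)):
            if not self.egress_ready[egress]:
                continue  # busy egress stalls only its own VOQs
            for port in ingress_ports:
                frame = port.dequeue_for(egress)
                if frame is not None:
                    granted.append((port.port_id, egress, frame))
                    break  # one grant per egress per cycle
        return granted

# With a single shared input queue, a frame for busy egress 0 at the head
# would also block the frame for idle egress 1. With VOQs it doesn't:
ingress = VoqIngressPort(0, num_egress_ports=2)
ingress.enqueue("frame-A", 0)   # destined for a congested port
ingress.enqueue("frame-B", 1)   # destined for an idle port
arbiter = FabricArbiter(num_egress_ports=2)
arbiter.egress_ready[0] = False # egress 0 can't serialize right now
print(arbiter.schedule([ingress]))  # [(0, 1, 'frame-B')]
```

The point of the example is the `continue` in the arbiter loop: a stalled egress port parks only its own VOQs, while every other queue keeps draining, which is exactly the property a single shared input queue cannot give you.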
The Cisco Nexus 7000 uses a switched Ethernet out-of-band channel (EOBC) for management and control traffic between the supervisors and line cards and between the supervisors and spine cards. On the supervisor modules, Ethernet connectivity is provided by an on-board 24-port Ethernet switch-on-a-chip, with one 1 Gbps Ethernet link from each supervisor to each line card, from each supervisor to each switch fabric card (up to five), and between the two supervisors (Figure 9). Two additional redundant 1 Gbps Ethernet links on each supervisor connect to the local CPU within the supervisor. This design provides a highly redundant switched-Ethernet fabric for control and management traffic between the supervisors and all other processors and line cards within the system.
This is not meant to be a perfect score for the Nexus 7000. There are some hardware deficiencies, particularly the paucity of line-card options. There are only three right now, one of which is the supervisor. If you want SFP Gigabit Ethernet, you're out of luck. And remember, these aren't cheap. Still, the hardware definitely has potential and is set up for the future. Smooth out some rough edges, add some more options, and this will be a powerhouse in DC networks....in 2009.