
Complete End-to-End Nexus Data Center Design ... (Ok, almost end-to-end!!!)

As I mentioned in a blog post a couple of weeks ago about the Cisco ASR line, we are building a new data center right now using the Cisco Nexus 7000 and 5000 series switches. The lead network engineer on my team, Kamal Vyas, is responsible for several parts of the DC network design, particularly the main, internal data center network. He joins us this week as a guest blogger with the following post.


With the introduction of the Nexus 2K (FEX), Cisco now has a complete end-to-end Nexus solution for data center networks. And the timing is perfect: this week, at Cisco's Customer Proof of Concept Labs (CPOC), we got the opportunity to test out an end-to-end Nexus DC design comprising Nexus 7Ks (aggregation) and 5Ks and 2Ks (access). The services modules and core switches leverage the flagship 6509s, with ASRs at the WAN edge. Yup...scary, huh!!! Tell me about the concerns of introducing so many new devices and code types (not flavors) into the DC network. We are using NX-OS at the aggregation and access layers, IOS XE at the WAN edge, and IOS at the core and services layers. Most of them are early in their deployment cycles, with limited interop testing.

The products are completely diverse as well. The 7K is manufactured by the Cisco storage group, the 5K by Nuova (now Cisco), the 6500 by the good old ISBU, and the ASR (IOS XE) by the Routing BU. I am sincerely hoping to see unity in this diversity while testing. Hence, all the more reason to make sure this gets thoroughly tested in the lab and documented (yup, CYA). It also needs a lot of help from Cisco's cross-BU teams, working in tandem, to take this solution close to SRND/DCAP levels.

Since we are in a lab environment, we plan to turn on all the bells and whistles, including vPC, VDC, VRF, STP enhancements, and contexts. This is a perfect opportunity to get our hands dirty. It will also help us make intelligent decisions about which options to pick on day one and which ones we would rather wait on a little longer. To summarize, the salient features of the proposed design are:

  1. Virtual Port Channel (vPC) access design - which, by the way, is a well-thought-through design. I'm just hoping it is implemented equally well, with no “bugs”. Thanks to the Cisco BU team for working with us and making the EFT code available even before it is posted on CCO. This could be the much-awaited Spanning Tree killer inside the DC (see the configuration sketch right after this list).
  2. Virtualization and consolidation of the services modules inside a separate services chassis parallel to the aggregation layer.
  3. 10 Gig density at the aggregation and access layers to support dense, virtualization-ready, turbo-charged servers.
  4. FCoE-ready network - we will just need to wait for the servers to pop in CNAs and "go-FCoE-baby" (a sketch of the 5K side also follows below).
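
To give a flavor of what we will be testing, here is a minimal sketch of a vPC pairing on a Nexus 7K aggregation switch. This is illustration only: the domain ID, keepalive addresses, and interface numbers are placeholders I made up, and the exact knobs may well shift as the EFT code matures.

    feature vpc

    vpc domain 10
      ! keepalive runs out-of-band between the two peers (mgmt VRF here)
      peer-keepalive destination 192.168.10.2 source 192.168.10.1 vrf management

    interface port-channel1
      ! dedicated inter-switch trunk carrying the vPC VLANs
      switchport
      switchport mode trunk
      spanning-tree port type network
      vpc peer-link

    interface port-channel20
      ! downstream channel toward an access pair; same vPC number on both peers
      switchport
      switchport mode trunk
      vpc 20

    interface Ethernet1/10
      switchport
      switchport mode trunk
      channel-group 20 mode active

With both peers configured symmetrically, the downstream switch sees one logical port channel instead of a redundant blocked uplink - which is exactly why vPC gets billed as a Spanning Tree killer.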
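
On the virtualization front, this is the kind of VDC and VRF carving we expect to try on the 7K. Again, a sketch under assumptions: the VDC name, interface range, and addressing are invented, and interface allocation has to respect the port groups of the actual line card.

    ! carve a second virtual switch out of the physical 7K (from the default VDC)
    vdc agg-test id 2
      allocate interface Ethernet2/1-8

    ! hop into the new VDC and build a VRF inside it
    switchto vdc agg-test
    configure terminal
    feature interface-vlan
    vlan 100
    vrf context prod
    interface Vlan100
      vrf member prod
      ip address 10.100.0.1/24
      no shutdown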
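
And for the FCoE item, a rough sketch of what lighting up a server-facing port on the 5K should look like. The VLAN/VSAN numbers and the interface are placeholders; the CNA on the server side is the piece we are still waiting for.

    feature fcoe

    ! map a VLAN to carry FCoE traffic for VSAN 200
    vlan 200
      fcoe vsan 200

    ! virtual Fibre Channel interface bound to the server-facing 10G port
    interface vfc10
      bind interface Ethernet1/10
      no shutdown

    vsan database
      vsan 200
      vsan 200 interface vfc10

    interface Ethernet1/10
      switchport mode trunk
      ! the trunk must carry the FCoE VLAN alongside the data VLANs
      switchport trunk allowed vlan 1,200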

Over the course of my next few blogs, I will touch upon individual areas of the DC network and provide more details about my experience going through the process. Please feel free to share any comments or suggestions in this regard. Stay tuned!!!!


More to come from Kamal about our data center network design and build in March.

More From the Field blog entries:

Cisco Data Center "Big Bang" Announcement - YYYYYAAAAAWWWWWNNNNNN.....

Now a Look at Cisco IOS XE for the ASRs

Taking a Closer Look at the Cisco ASR 1000 Series

If Someone (like your boss) is Asking You What the CCDE Is....

Passing the CCDE is Starting to Sink In

Holly Crap I Passed the CCDE!!!!

