Cisco impresses with UCS

Integrated server blades, networking and management make UCS a strong contender for fast-growing data centers in this exclusive Network World test


If you've gotten used to the advanced security features of Cisco's Nexus 1000V virtual switch in your VMware environment, you won't find them in Cisco UCS; you'd have to combine UCS with the 1000V, losing some of the benefits of UCS.

Cisco goes even further and strongly suggests you run the fabric interconnect in "End Host" mode, which disables spanning tree and makes the UCS domain connect to your network as if it were a really, really big host. UCS can then spread the load of different VLANs across all uplinks from the fabric interconnect to the rest of the network. This advice makes it clear who UCS is designed for: not the network manager, but the server hardware manager.

Strict configuration makes for simplified networking

Networking flow in Cisco UCS is very hierarchical and very constrained. Every blade carries Ethernet data, Fibre Channel data, and some out-of-band management traffic over two private 10Gbps connections. These two connections are internal to the chassis, one from each blade to each of the two fabric extenders (in the normal case). The fabric extenders connect upward, out of the chassis, to the fabric interconnects, typically using two ports per fabric extender for a total of four ports per chassis going to the two fabric interconnects.

From the fabric interconnects, Cisco UCS connects to the rest of your Ethernet and Fibre Channel network via separate Fibre Channel and 10Gbps Ethernet connections.

Some variation in networking is possible, but not a lot. Cisco has multiple Ethernet cards available for the blades, but most network managers will use the M81KR adapter, code-named "Palo," which presents itself as Fibre Channel and Ethernet NICs to the blade, and has two 10Gbps internal uplink ports.

There's also an Ethernet-only card if you don't want Fibre Channel, which will save you $300 a blade. However, if you're not heavily into Fibre Channel storage, all of the networking integration and many of the provisioning advantages of UCS won't mean anything to you — which suggests that UCS works best in a Fibre Channel environment.

Diagram of Cisco UCS

In other words, if you're using iSCSI or local storage, you're not a great candidate for seeing the advantages of UCS.

When we looked at UCS last month, the fabric extender was limited to the 2104XP, which has eight internal ports (one for each blade) and four uplink ports to the fabric interconnect, all at 10Gbps. A 2208 model has been announced (along with a matching high-density Ethernet card), with 32 internal ports and eight uplink ports, for the rare environment where 10Gbps is just not enough for a single blade.

The fabric interconnects have also been revised. Cisco originally released the UCS 6120XP and UCS 6140XP, able to handle 20 and 40 chassis ports plus uplink capacity. The current replacement for both is the UCS 6248UP, with a total of 48 ports. Depending on how the rest of your network looks, that would leave you room for 20 to 22 chassis per switch. The unannounced-but-nearly-ready UCS 6296UP would double those numbers, allowing up to 44 chassis, or 352 blades, per UCS domain.

Those maxima are pretty important, because you can't grow UCS domains (that's the word Cisco uses for a combination of fabric interconnects and chassis) beyond two peer-connected fabric interconnects.

If you follow best practice recommendations for redundancy, that means you start with two fabric interconnects (which are clustered into a single management unit), and can have up to about 22 chassis, or 176 blade servers, per UCS domain using released hardware. (Double that if you're willing to wait for the UCS 6296UP to ship.)
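The capacity figures above follow from simple arithmetic: each chassis typically consumes two ports on each fabric interconnect, and each chassis holds eight blades. Here's a back-of-the-envelope sketch of that sizing math; the number of interconnect ports reserved for network and SAN uplinks is an assumption, since it depends on your environment.

```python
# Back-of-the-envelope UCS domain sizing, using figures from the article.
# Assumptions: each chassis consumes 2 ports on each fabric interconnect
# (2104XP fabric extender with 2 uplinks per fabric), 8 blades per chassis.
# The ports reserved for network/SAN uplinks are illustrative guesses.

def max_chassis(interconnect_ports: int, uplink_ports_reserved: int,
                ports_per_chassis: int = 2) -> int:
    """Chassis a single fabric interconnect can serve."""
    return (interconnect_ports - uplink_ports_reserved) // ports_per_chassis

def max_blades(chassis: int, blades_per_chassis: int = 8) -> int:
    return chassis * blades_per_chassis

# UCS 6248UP: 48 ports. Reserving 4 to 8 for uplinks gives 20 to 22 chassis.
for reserved in (4, 8):
    c = max_chassis(48, reserved)
    print(f"6248UP, {reserved} uplink ports reserved: "
          f"{c} chassis, {max_blades(c)} blades")
```

Doubling the port count to model the 6296UP (96 ports) yields 44 chassis, or 352 blades, matching the domain maximum cited above.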

All of these configuration guidelines and capabilities make UCS networking a great fit in some environments, but not in others.

If you've had networking configuration and management problems with large virtualization environments or even physical environments with lots of servers, Cisco UCS provides a dramatic simplification by creating a flat distributed switch that reaches all the way down to each guest virtual machine.

If you've been burned by cable management problems, or if the idea of bundling more than 150 servers or 1,500 virtual systems into four racks with 80 internal patch cables and fewer than 10 external patches seems like a good one, then the network density and rollup of UCS will definitely drop your blood pressure, and reduce the likelihood of patching and configuration errors.
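Those cable counts fall straight out of the topology: each chassis needs only four cables to the pair of fabric interconnects, and the interconnects need just a handful of uplinks to the rest of the network. A minimal tally, with the network uplink count as an illustrative assumption:

```python
# Cable tally for a UCS pod, per the topology described in the article:
# each chassis connects to the two fabric interconnects with 4 cables
# (2 per fabric extender); the interconnects then need only a few
# uplinks to the rest of the network. Uplink count is an assumption.

def cables(chassis: int, fex_uplinks_per_chassis: int = 4,
           network_uplinks: int = 8) -> dict:
    return {
        "chassis_to_interconnect": chassis * fex_uplinks_per_chassis,
        "interconnect_to_network": network_uplinks,
    }

# ~20 chassis (160 blades) in four racks:
print(cables(20))
# {'chassis_to_interconnect': 80, 'interconnect_to_network': 8}
```

That's the "80 internal patch cables and fewer than 10 external patches" figure; a comparable count of individually cabled 1U servers would need hundreds of patch cables.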

Is UCS right for you?

After spending a week looking in depth at Cisco UCS, it's easy to come away excited about the product. The engineering is solid, the software isn't buggy, and UCS clearly has something to offer to the data center manager.

On the other hand, UCS is not for everyone. If you've only got 100 servers in your data center, or if you're not adding racks full of servers every few months, you won't appreciate the management interface, because you're not feeling the pain of deploying servers.

If you're worried about single vendor lock-in for hardware and networking, if you run the same application on 10,000 servers, or if capital costs for servers are a major concern, Cisco UCS won't be very attractive to you.

Cisco UCS is thoroughly modern hardware. The performance (running industry standard benchmarks) in both virtualization and non-virtualization environments is outstanding. Features such as power management, hardware accessibility, and high-speed networking are what you'd want from a server vendor. Although there will always be a lingering concern about whether Cisco will stay in the server business, the company has shown continuing innovation and development, and solid commitment from customers, up to this point.

The use case for UCS boils down to two advantages: agility, and shrinking provisioning and maintenance time.

Agility, because UCS treats server blades the way that SANs treat disk drives: as anonymous elements that are brought into play as needed by the load. Whether or not you're layering a virtualization workload on top of non-virtualized servers, UCS offers some of the benefits of virtualization at the server hardware layer.

One Cisco staffer called it "VMotion for bare metal." It's not exactly that, of course, but the idea is the same: virtual or non-virtual workloads can be moved around computing elements. This makes it easy to upgrade servers, to manage power, to balance loads around data centers, and to maintain hardware in a high-availability world.

The shrinking of provisioning and maintenance time comes from the management interface. All of the little details of bringing a new rack of servers online, from Fibre Channel addressing to virtual and physical NICs, cabling, power management, and making sure that every little setting is correct, are taken care of by the UCS management layer, whether you use Cisco's applications, a multi-domain orchestrator from a third party, or even home-grown tools.

If virtualization is one of the first steps you take to gain a competitive advantage in enterprise computing, then the agility and flexibility that UCS delivers are good second steps.

Snyder, a Network World Test Alliance partner, is a senior partner at Opus One in Tucson, Ariz. He can be reached at


Copyright © 2011 IDG Communications, Inc.
