If you could design a blade server system entirely from scratch, it might look a whole lot like Cisco's Unified Computing System.
Encapsulating Cisco's Unified Computing System into a few paragraphs is a daunting challenge. Cisco UCS is quite unlike any other computing platform on the market today, and while there are certainly parallels to existing models, UCS carves a new path through the woods of IT. To convey the major differences, it's best to start in familiar territory and compare UCS with a traditional blade infrastructure.
With a "normal" blade infrastructure, you take pieces from every corner of the IT pie -- storage, network, servers, and management -- and put them together. Each blade chassis will have some number of Ethernet and SAN interfaces, either grouped using internal switching with uplinks or dedicated on a per-blade basis, and these interfaces are then connected to a larger Fibre Channel and Ethernet network. Thus, each chassis exists as an island within the datacenter, and each blade exists as an island within the chassis.
[ Peer deep inside the Cisco Unified Computing System in InfoWorld's "Test Center review: Cisco UCS wows." ]
Management frameworks surround these pieces and typically tie them together in some fashion, but the reality is that today's blade infrastructures are more akin to closely grouped banks of separate servers than a bundle or pool. That's where UCS differs significantly.
The UCS model dispenses with fixed ports and internal switching. It removes the smarts from the chassis as well. Each chassis is essentially just sheet metal and a backplane. No switching occurs within a chassis; the chassis is simply an extension of the UCS fabric, which is driven by two redundant Fabric Interconnects. These are not switches, but might be thought of as controllers.
The Cisco UCS 6120XP FI has 20 10Gb Ethernet ports and an expansion slot for 4Gbps Fibre Channel connections to a SAN. Each port can be designated as a server or uplink port, with the chassis connected to the server ports, and the larger LAN connected via the uplink ports. Drop in Fibre Channel connections to your SAN and you're done. Cabling a UCS deployment is extremely simple and requires very few cables per chassis -- up to eight if you need all that bandwidth, but four should be more than enough for most cases.
The fabric is the computer
Unlike the traditional model, there are no dedicated Fibre Channel or Ethernet links in the chassis -- everything is Ethernet. When a blade communicates with the SAN via FC, those packets are encapsulated into FCoE (Fibre Channel over Ethernet) and broken out into straight FC in the Fabric Interconnect. When a blade communicates via Ethernet, that traffic is shipped straight out along the same pipes. In this way, Cisco has greatly simplified the overall architecture and made better use of available bandwidth, whether for network or storage or both.
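The encapsulation itself is conceptually simple: the FC frame rides as the payload of an ordinary Ethernet frame tagged with the FCoE EtherType (0x8906). Here is a toy sketch of that framing in Python -- a real FCoE frame carries a version/SOF/EOF header, padding, and CRC that are omitted here:

```python
import struct

FCOE_ETHERTYPE = 0x8906  # IEEE-assigned EtherType identifying FCoE traffic

def encapsulate(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Wrap a raw Fibre Channel frame in a bare-bones Ethernet header.
    Real FCoE adds an FCoE header (version, SOF) and trailer (EOF, FCS);
    this sketch shows only the EtherType-based encapsulation idea."""
    return dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE) + fc_frame

# A frame tagged 0x8906 is steered to the FC side of the Fabric
# Interconnect; anything else stays on the Ethernet side.
frame = encapsulate(b"\x00" * 6, b"\x11" * 6, b"FC-PAYLOAD")
assert struct.unpack("!H", frame[12:14])[0] == FCOE_ETHERTYPE
```

The Fabric Interconnect's job at the SAN boundary is then just the reverse of `encapsulate`: strip the Ethernet framing and forward native FC.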
This architecture has many benefits, the most notable being immense scalability. With each chassis treated like a hot-swap line card in a switch, adding chassis is as simple as plugging them in. Since UCS chassis have no brains, they don't require any configuration. They're also cheap when compared to "smart" chassis from other vendors, since they have no internal management or switching hardware. Thus, UCS is expensive with one or two chassis, but once you get to three, it becomes significantly cheaper than comparable traditional blade deployments.
Each Fabric Interconnect can handle up to nine redundantly connected chassis with 20Gb connections to each FI, assuming two 10Gb uplinks to the LAN and a Fibre Channel expansion card. That's 72 blades per pair of UCS 6120XP 20-port FIs. The forthcoming UCS 6140 series doubles those ports to address up to 144 redundant blades, all driven from a single set of FIs. If you link each chassis with a single 10Gb connection to each fabric instead of two, those numbers double. Any way you slice it, UCS is amazingly scalable.
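The port arithmetic behind those numbers is straightforward. A back-of-the-envelope sketch, using only the figures above (20 fixed ports per 6120XP, two reserved LAN uplinks, eight blades per chassis):

```python
# Back-of-the-envelope UCS scaling math, based on the figures in the text.
FI_PORTS_6120 = 20       # fixed 10GbE ports on a UCS 6120XP
LAN_UPLINKS = 2          # ports reserved for LAN uplinks per FI
BLADES_PER_CHASSIS = 8   # half-width blades per UCS chassis

def max_blades(fi_ports: int, links_per_chassis: int) -> int:
    """Blades addressable per FI pair, given the server-port budget."""
    server_ports = fi_ports - LAN_UPLINKS
    chassis = server_ports // links_per_chassis
    return chassis * BLADES_PER_CHASSIS

print(max_blades(FI_PORTS_6120, 2))  # two 10Gb links per fabric -> 72
print(max_blades(FI_PORTS_6120, 1))  # one 10Gb link per fabric -> 144
```

Two links per chassis per fabric yields nine chassis and 72 blades; dropping to one link per fabric doubles both figures, matching the totals quoted above.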
The management is also greatly simplified. There are no server-based management components or external packages required; everything is driven from a single elegant Java GUI or CLI run directly from the Fabric Interconnects themselves. The entire configuration for any UCS implementation is a single XML file that can be copied to a backup location at a whim. Restoring the configuration is equally simple. In fact, the XML API available with UCS makes custom scripting a breeze with anything from Perl to Ruby on Rails.
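Because every request and response is plain XML over HTTP, a script needs nothing beyond a standard library. Here is a minimal sketch using Python, following the XML API's method names (`aaaLogin`, `configResolveClass`); the UCS Manager address and credentials are placeholders, and this is an illustration rather than a complete client:

```python
# Minimal sketch of a UCS Manager XML API client using only the Python
# standard library. Host, user, and password below are hypothetical.
import urllib.request
import xml.etree.ElementTree as ET

UCSM_URL = "https://ucsm.example.com/nuova"  # placeholder UCS Manager endpoint

def post_xml(body: str) -> ET.Element:
    """POST an XML request document to UCS Manager, parse the reply."""
    req = urllib.request.Request(UCSM_URL, data=body.encode(), method="POST")
    with urllib.request.urlopen(req) as resp:
        return ET.fromstring(resp.read())

def login_request(user: str, password: str) -> str:
    """aaaLogin returns a session cookie used by all later calls."""
    return '<aaaLogin inName="%s" inPassword="%s"/>' % (user, password)

def blade_query(cookie: str) -> str:
    """Ask for every blade object known to the system."""
    return ('<configResolveClass cookie="%s" classId="computeBlade" '
            'inHierarchical="false"/>' % cookie)

def blade_slots(response_xml: str):
    """Pull (chassis, slot) pairs out of a configResolveClass response."""
    root = ET.fromstring(response_xml)
    return [(b.get("chassisId"), b.get("slotId"))
            for b in root.iter("computeBlade")]
```

The same request/parse pattern covers the whole object model, which is why wiring UCS into anything from Perl scripts to a Rails app amounts to little more than templating XML.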
Also, UCS is completely hierarchical. Rather than building servers, you build service profile templates and service profiles. Profiles are divorced from physical servers and can exist on any blade at any time. Creating profiles for commonly deployed servers is simple, and servers can be built from those templates on demand. It may take 30 minutes from start to finish to deploy a dozen blades with VMware vSphere, for instance. That's starting from scratch and ending with 12 running ESX servers. The hardest part is dealing with the storage, which is outside of UCS's purview.
Predictions and reality
When I first learned that Cisco was getting into the blade business, my thoughts were sour. In fact, in response to Cisco's announcement of UCS way back in March I wrote, "Cisco is probably going to be marketing and selling fixed-purpose blades, most likely manufactured by Quanta or another third party, branding them as Cisco devices, and trying to sell them as high-end virtualization platforms. They may have picked a good baseline architecture in the Intel Nehalem foundation, but otherwise, it's still just a blade server with lots of RAM and a tarted-up chassis."
I then went on to predict that, striving to gain some control over the market, Cisco would try to create new standards that would be at odds with the rest of the world. I was both exactly right and exactly wrong. Quanta does indeed manufacture the blade hardware, and they are in fact Nehalem-based blade servers with lots of RAM and a tarted-up chassis. However, Cisco is not trying to push proprietary standards, but is instead working well within established frameworks.
That said, the datacenter paradigm shift represented by UCS is beyond anything I could have imagined at the time. I went into my testing and review of Cisco UCS fully expecting to be underwhelmed and wound up coming away extremely impressed with what Cisco has accomplished.
This story, "How Cisco UCS reinvents the datacenter," and the companion review, "Test Center review: Cisco UCS wows," were originally published at InfoWorld.com. Follow the latest development in Cisco's Unified Computing System, blade servers, hardware, and virtualization at InfoWorld.com.