Inside Cisco's newest data center, built from the ground up to be driven by UCS

There are innovations in every corner of this 10-megawatt, 27,000-square-foot, state-of-the-art data center

Cisco bet big on its UCS products for data centers – and now it's going "all in" with a massive, resilient and green data center built on that integrated blade architecture. In fact, the company as a whole is migrating to the year-old Unified Computing System – Cisco's bold entree into the world of computing – as fast as possible. Plans call for 90% of Cisco's total IT load to be serviced by UCS within 12 to 18 months.

The strategy is most evident in the new data center the company is just now completing in the Dallas/Fort Worth area (exact location masked for security) to complement a data center already in the area. Texas DC2, as Cisco calls it, is ambitious in its reliance on UCS, but it is also forward-leaning in that it will use a highly virtualized and highly resilient design, act as a private cloud, and boast many green features. Oh, and it's very cool.

A K8 fence can stop a 15,000-pound truck going 40 mph in one meter.

While the outside of the center is innocuous enough – it looks like a two-story office building – more observant passersby might recognize some telltales that hint at the valuable contents. Besides the general lack of windows, the building is surrounded by an earthen berm designed to shroud the facility, deflect explosions and help tornadoes hop the building (which is hardened to withstand winds up to 175 mph). And if they know anything about security, they might recognize the fence as a K8 system that can stop a 15,000-pound truck going 40 mph in one meter.

Ample supply of power, redundant net connections

Another thing that stands out from outside: the gigantic power towers next door, carrying one of the main high-voltage lines spanning Texas, says Tony Fazackarley, the Cisco IT project manager overseeing the build. Those lines serve a local substation that delivers a 10-megawatt underground feed to the data center, but Cisco also has a second 10-megawatt feed coming in above ground from a separate substation. The lines are configured in an A/B split, with each line supplying 5 megawatts of power but capable of delivering the full 10 megawatts if needed.
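To make the A/B arrangement concrete, here's a minimal sketch of the failover arithmetic. The function and figures are illustrative assumptions (an even split across two feeds, each rated for the full load), not Cisco's switchgear logic:

```python
# Minimal sketch of the A/B feed arithmetic (not Cisco's switchgear logic).
# Assumes two feeds, each rated for the full 10 MW facility load.

FEED_CAPACITY_MW = 10.0   # either feed can carry the whole facility
TOTAL_LOAD_MW = 10.0      # facility design load

def feed_loads(feed_a_up, feed_b_up, load_mw=TOTAL_LOAD_MW):
    """Return the load (in MW) carried by feeds A and B under an even split."""
    feeds_up = [feed_a_up, feed_b_up]
    live = sum(feeds_up)
    if live == 0:
        raise RuntimeError("both feeds down; the UPS/generator plant must carry the load")
    share = load_mw / live
    if share > FEED_CAPACITY_MW:
        raise RuntimeError("load exceeds the capacity of the surviving feed")
    return tuple(share if up else 0.0 for up in feeds_up)

print(feed_loads(True, True))    # (5.0, 5.0): normal operation, 5 MW per feed
print(feed_loads(True, False))   # (10.0, 0.0): feed B lost, feed A carries it all
```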

Network connections to the facility are also redundant. There are two 1Gbps ISP circuits delivered over diversely routed, vendor-managed DWDM access rings, both of which are scheduled to be upgraded to 10Gbps. And there are two 10Gbps connections on DWDM links to the North Carolina and California data centers, with local access provided by the company's own DWDM access ring. As a backup, Cisco has two OC-48 circuits to those same remote locations, both of which are scheduled to be upgraded to 10Gbps in March.

Flywheel UPSes and no batteries

There are two UPS rooms, each housing four immense flywheel/generator/diesel engine assemblies that together can generate 15 megawatts of power. The flywheels are spun at all times by electric motors, and you have to wear earplugs in the rooms because the sound is deafening even when the diesel engines are at rest. In the event of a power hiccup, the flywheels spinning the generators keep delivering power for 10 to 15 seconds while the diesel engines are started. Once the diesels spin up, clutches connect them to the generators.

All the generators are started at once and then dropped out sequentially until the supply matches the load required. But the transfer is fast because the whole data center is powered by AC and, because there are no batteries, there is no need to step the power down to DC and back up to AC and resync it, as is required when battery-backed DC power is used. The facility has 96,000 gallons of diesel on site.
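As a rough illustration of that load-matching step, here's a minimal sketch that starts every generator set and sheds them one at a time until the remaining capacity just covers the load. Eight sets sharing the 15 megawatts evenly is an assumption for illustration, not Cisco's actual sizing or controls:

```python
# Minimal sketch of "start everything, then shed to match the load."
# Eight sets sharing 15 MW evenly is an illustrative assumption.

TOTAL_SETS = 8                      # four assemblies in each of the two UPS rooms
SET_MW = 15.0 / TOTAL_SETS          # even share of the 15 MW total

def gensets_to_keep(load_mw, total_sets=TOTAL_SETS, set_mw=SET_MW):
    """Return how many generator sets stay online after sequential shedding."""
    online = total_sets                              # all generators start at once
    while online > 1 and (online - 1) * set_mw >= load_mw:
        online -= 1                                  # drop one; capacity still covers the load
    return online

print(gensets_to_keep(6.0))   # a 6 MW load keeps 4 of the 8 sets running
print(gensets_to_keep(14.0))  # near full load keeps all 8 online
```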

Rooftop cooling towers

Cisco uses an air-side economizer cooling design that reduces the need for mechanical chilling by ducting filtered, fresh air through the center when the outside temperature is low enough, says Fazackarley. The design saves energy and money and of course is very green. However, when cooling is required it all starts here with three 1,000-ton cooling towers on the roof of the facility. Water is cooled by dripping it down over a series of louvers in an open air environment and then collected and fed to the chillers in a closed loop.

One of five chillers

Pre-cooled water from the cooling towers is circulated through five chillers (three 1,000-ton and two 500-ton machines), reducing the amount of refrigeration required to cool water in a second closed loop that circulates from the chillers to the air handlers. (The chillers don't use CFC coolant, another green aspect of the facility.)

Massive pumps circulate the cooling fluids from the towers to the chillers and from the chillers to the air handlers

A series of valves activated by cranks spun by chains makes it possible to connect any tower to any chiller via any pump, a redundancy precaution. And on the green side, the chillers have variable frequency drives, meaning they can operate at lower speeds when demand is lower, reducing power consumption.
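What that any-to-any valving buys is easier to see in a minimal sketch: any available tower, pump and chiller can be combined into a working cooling path. The component names below are hypothetical placeholders:

```python
# Minimal sketch of the any-to-any valving redundancy. Component names are
# hypothetical placeholders, not the facility's actual equipment labels.

from itertools import product

def cooling_paths(towers, pumps, chillers):
    """Return every (tower, pump, chiller) combination that could be valved in."""
    return list(product(towers, pumps, chillers))

paths = cooling_paths(
    towers=["tower-1", "tower-3"],        # say tower-2 is down for maintenance
    pumps=["pump-A", "pump-B"],
    chillers=["chiller-2", "chiller-4"],
)
print(len(paths), "workable paths, for example:", paths[0])
```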

Air handlers play a key role in the air-side economizer design, making it possible to cool the facility using fresh, outside air.

The air handlers pull in hot air from the data halls and pass it through coils cooled with fluid from the chillers, and then route the conditioned air back to the computing rooms. But when the outside temperature is below 78 degrees Fahrenheit, the chillers are turned off and louvers on the back of the air handlers are opened to let fresh air in, which gets filtered, humidified or dehumidified as needed, and passed through the data halls and out another set of vents on the far side.
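The switch-over amounts to a simple temperature comparison. Here's a minimal sketch, assuming the 78-degree setpoint described above and ignoring the mixed mode the facility can also run:

```python
# Minimal sketch of the economizer switch-over, assuming the 78 F setpoint
# and ignoring the mixed free-air/chiller mode the facility can also run.

FREE_AIR_SETPOINT_F = 78.0

def cooling_mode(outside_temp_f, setpoint_f=FREE_AIR_SETPOINT_F):
    """Choose how the air handlers condition the data halls."""
    if outside_temp_f < setpoint_f:
        return "free-air"        # chillers off, louvers open, fresh air filtered and conditioned
    return "chilled-water"       # recirculate hot-aisle air through the chiller coils

for temp_f in (65.0, 78.0, 95.0):
    print(f"{temp_f:5.1f} F -> {cooling_mode(temp_f)}")
```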

Cisco estimates that, even in hot Texas, the facility will be able to run in so-called free-air mode 51% of the time, rely on the chillers 47% of the time, and use a mix of the two the remaining 2%. Savings in cooling costs are expected to be $600,000 per year, a huge win on the balance sheet and in the green column.

When online, DC2 should boast a Power Usage Effectiveness (PUE) rating of 1.25. PUE is the ratio of total facility power to the power actually delivered to the computing equipment, so a 1.25 rating means only a quarter of a watt of cooling and other overhead for every watt of computing.
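The arithmetic behind that rating is simple enough to show in a few lines; the kilowatt figures below are illustrative, chosen only to land on the 1.25 number:

```python
# Minimal sketch of the PUE arithmetic: total facility power divided by the
# power that reaches the IT gear. The kilowatt figures are illustrative.

def pue(total_facility_kw, it_load_kw):
    """Power Usage Effectiveness = total facility power / IT equipment power."""
    return total_facility_kw / it_load_kw

it_load_kw = 8000.0       # hypothetical IT load
overhead_kw = 2000.0      # hypothetical cooling, lighting and distribution losses
print(pue(it_load_kw + overhead_kw, it_load_kw))   # 1.25
```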

A rooftop solar array provides electricity for the office spaces

The solar array on the roof can generate 100 kilowatts of power, enough to power the office spaces in the building.  Other green aspects of the facility:
* A heat pump provides heating/cooling for the office spaces.
* A lagoon captures gray water from lavatory wash basins and the like for use in landscape irrigation.
* Indigenous, drought-resistant plants on the property reduce irrigation needs.

Tony Fazackarley and IT Team Leader James Cribari in a data hall with the racks that will accept the UCS systems.

Note the tiles on the concrete slab that mimic typical raised-floor dimensions. Air can't be circulated through the floor, but Cisco uses a standard hot/cold aisle configuration, with cold air pumped down from above and hot air sucked up out of the top of the racks through chimneys that extend partway to the high ceiling above the cold air supply. The idea, Cribari says, is to keep the air stratified to avoid mixing. The rising hot air either gets sucked out in free-air mode or is directed back to the air handlers for chilling.

UCS racks ready to be outfitted

Power bus ducts run down each aisle and can be reconfigured as necessary to accommodate different needs. As currently designed, each rack gets a three-phase, 240-volt feed. All told, this facility can accommodate 240 UCS clusters (120 in each hall). A cluster is a rack with five UCS chassis in it, each chassis holding eight server blades with up to 96GB of memory apiece. That's a total of 9,600 blades, but the standard blade has two sockets, each of which can support up to eight processor cores, and each core can support multiple virtual machines, so the scale is robust. The initial install will be 10 UCS clusters, Cribari says.
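Here's a minimal sketch of the capacity math using the figures above; the VMs-per-core value is a placeholder, since the paragraph says only that each core supports "multiple" virtual machines:

```python
# Minimal sketch of the capacity math, using the figures quoted above.
# The VMs-per-core value is a placeholder assumption.

CLUSTERS = 240            # 120 racks in each of the two data halls
CHASSIS_PER_CLUSTER = 5   # a cluster is one rack holding five UCS chassis
BLADES_PER_CHASSIS = 8
SOCKETS_PER_BLADE = 2
CORES_PER_SOCKET = 8      # "up to eight processor cores" per socket

blades = CLUSTERS * CHASSIS_PER_CLUSTER * BLADES_PER_CHASSIS
cores = blades * SOCKETS_PER_BLADE * CORES_PER_SOCKET
vms_per_core = 4          # illustrative assumption only

print(blades)                  # 9,600 blades at full build-out, matching the figure above
print(cores)                   # 153,600 cores at the maximum configuration
print(cores * vms_per_core)    # a rough sense of VM scale under that assumption
```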

Network-attached storage will be interspersed with the servers in each aisle, creating what Cribari calls virtual blocks or vBlocks. The vBlocks become a series of clouds, each with compute, network and storage.

The data center is scheduled to be turned over to the implementation team in early December and come online in March or April.