How three enterprises have benefited from data center efficiencies

First-hand looks at how space, power and cooling efficiencies have improved data center operations at Bowne, Digital Realty and Unisys

Page 2 of 3

Space savings: How Bowne & Co. used more efficient cooling and power systems to help drop data-center size requirement by 60%

When it comes to data center operations, space is money. That's why Bowne focused on reducing the footprint when it built a new data center in West Caldwell, N.J., in the fall of 2006.

Bowne has a storied history. Founded in 1775, the company is the world's oldest financial printing company, churning out annual reports, stock certificates and other financial and regulatory documents. The New York City firm has 3,200 employees in 60 offices around the globe.

See another case study: Bryant University drops energy costs by 20%.

Bowne operates three data centers in the United States and one in Europe. Two act as primary data centers and two are secondary facilities.

The footprint for the West Caldwell data center is 60% smaller than its predecessor's. Infrastructure enhancements made this possible, says Louis Bomentre, Bowne's director of facilities critical infrastructure: "Our New Jersey data center incorporates precision cooling, support for high-density environments, newer HVAC systems and power distribution, and hot aisle/cold aisle technology. That's what allowed us to collapse our data center into a smaller footprint."

With a smaller data center, however, cooling becomes a bigger challenge, Bomentre says. "Smaller-footprint data centers . . . become more voracious. Their appetites for energy and cooling multiply. You have to really focus on hot aisle/cool aisle containment and power distribution local to the rack," he says.

Rather than cooling the entire computer room, Bowne chose to cool the IT equipment directly. The new data center provides both precision cooling and hot-air containment at the rack level.

"Our precision, in-row cooling units are modular," Bomentre says. "We only buy the amount of precision cooling that is required. We're not cooling above and beyond what the IT equipment is producing. Even our chiller system is expandable. We've designed it in 10-ton increments, so we can add a 10-ton chiller and a 10-ton condenser when we need it."

Bowne also has saved on power costs by deploying American Power Conversion's Symmetra PX systems, which are expandable, energy-efficient UPSs. "We only purchase the amount of power required by the business today. We're never oversizing, buying more than we need or buying for tomorrow," Bomentre says.

Bowne monitors the power consumption of its UPS systems and racks, as well as the heat-load distribution of its precision cooling system.

"We're preparing that data to where it can become useful to benefit Bowne's bottom line," Bomentre says.

For 2008, Bomentre is evaluating green sources of electricity for his West Caldwell data center. "We're looking into deregulation so we can purchase our power from green providers. With our savings, we're looking to offset our carbon footprint to become a neutral environment," he says.

Measured savings: How Digital Realty keeps power costs under control at 70 data centers

Digital Realty Trust makes a business of managing energy consumption and efficiency at its 70 data centers in North America and Europe. Focusing on metrics to cut costs and improve the bottom line is key for the San Francisco-based company, which owns 12 million square feet of data-center space.


Digital Realty provides security, power and cooling in its data centers, and customers, such as Savvis and United Layer, provide their own computing power. Digital Realty is a leader in green data-center design, having earned the first gold certification from the U.S. Green Building Council in November 2007 for a data center in Chicago.

Digital Realty focuses on one metric for tracking its ongoing data-center operating costs: dollars per kilowatt of power used. "Power and cooling, and the cost of powering and cooling is the business," says Jim Smith, Digital Realty's vice president of engineering. "It used to be that everyone thought about square feet and the cost per square foot. But many large banks and most of the Internet industry, especially Microsoft and Yahoo, are shifting. Now we think of dollars-per-kilowatt when we build a new data center."

Digital Realty also uses dollars-per-kilowatt as the key metric for the capital cost of building a data center, Smith says.

"We use dollars-per-kilowatt as a performance tool internally," Smith says. "The cost per kilowatt is very sensitive to density and to power costs. In superdense environments like Connecticut, New York or San Francisco, it can be very high. In low-density environments like Austin, Texas, my dollars-per-kilowatt is lower."

Digital Realty developed the dollars-per-kilowatt metric in 2006 but has been using it as the key benchmark for its business since mid-2007, Smith says.

It's straightforward to calculate dollars-per-kilowatt: Divide the cost of the data center under construction by its kilowatts of UPS capacity, and you come up with the project's dollars-per-kilowatt measurement. For example, a $9 million data center that provides 1,000 kilowatts of UPS capacity costs $9,000 per kilowatt.
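The calculation above can be sketched in a few lines of Python; the function name and the example figures below simply restate the article's $9 million / 1,000 kW scenario:

```python
def dollars_per_kilowatt(project_cost_usd: float, ups_capacity_kw: float) -> float:
    """Capital-cost metric: total build cost divided by UPS capacity in kilowatts."""
    return project_cost_usd / ups_capacity_kw

# The article's example: a $9 million data center with 1,000 kW of UPS capacity.
print(dollars_per_kilowatt(9_000_000, 1_000))  # 9000.0
```

The same ratio works as an operating metric by substituting annual operating cost for build cost, which is how Digital Realty compares high-density and low-density markets.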

Using this metric, Digital Realty has found it gets better returns on low-density data centers than on high-density ones. Low-density data centers typically have lower dollars-per-kilowatt ratings than do high-density data centers. That's why Digital Realty builds data centers with power densities in the range of 100 to 150 watts per square foot.

"When you go really, really dense, you increase your operational temperature," Smith explains. "If you're operating a data center at 350 or 500 watts per square foot, a failure in your cooling system is an immediate crisis. . . . If you're operating at 250 watts per square foot, there's nothing you can do but turn everything off to prevent thermal runaway. At 100 watts per square foot, it's hard to get to thermal runaway."

Digital Realty records the power consumption and efficiency of its data centers every 15 minutes.

"Measuring [dollars per kilowatt] and building it into our business processes is a competitive advantage because the data center is our business," Smith says. "Our view is that in the very near term, data centers will have to be reporting their carbon footprint. If you don't have measurement tools in place, you could be out of compliance."

Cool savings: Unisys targets data-center air conditioning

Whether they like it or not, IT executives need to become air conditioning experts to reduce their data centers' power consumption. That's because air conditioning and related chillers and humidifiers account for around 45% of most data centers' electric bill, experts say.

Indeed, air conditioning is one of the areas where data center operators can make the greatest gains in energy efficiency. Typically, 30% to 50% of the electricity going into the building makes it to the IT equipment. The rest is used for the power and cooling infrastructure equipment.

"In the physical layer of power and cooling, there are five areas where data center owners can get some real value: high-efficiency UPS, decreasing losses in AC power distribution, close-coupled cooling, scalable power and cooling, and power- and cooling-capacity management software," says Carl Cottuli, vice president of American Power Conversion's Data Center Science Center. (If these air conditioning terms don't make sense to you, see our crib sheet for IT executives going green.) 

Unisys in December 2007 opened a 17,000-sq.-ft. addition to its data center in Eagan, Minn., that includes many of the latest trends in air conditioning systems. The addition's modular design lets Unisys expand its cooling capacity at the same pace as its processing load. Cold outside air is used for cooling.

"Key to our design is the ability to manage the data center, including managing the cooling and managing the power consumption. If we don't have a full load of equipment on the floor, we don't want to have a full load of cooling," says Victoria Bond, director of North America data center infrastructure for Unisys.

Careful modeling

Such techniques will result in the data-center addition consuming 24% less electricity than the original facility, Bond says. Roughly 16% of that efficiency comes from operating an energy-efficient air conditioning system, and 8% comes from using external cold air for cooling.

As Unisys was building the addition, the company used a computational fluid dynamics (CFD) model to select the best locations for its air conditioners and power distribution units. Data-center air conditioning vendor Liebert did the CFD analysis, using its own tool for hot- and cool-spot monitoring.

Unisys also did an energy audit of the original facility -- looking at power consumption and processing load -- before it designed the addition, Bond says.

The goal for the addition was to minimize power consumption, adds Michael Westerheim, facilities site manager for Unisys. "We ended up with four generators and four UPS modules. That allows us some flexibility, so we can run this center most efficiently at 25%, 50%, 75% or 100% load," he explains. "Most data center operators don't know what equipment is coming in when. Our key was to have this facility be as flexible as possible."

Unisys can monitor the addition's computing load and power requirements minute by minute. "We are able to monitor our processing loads over time and to control our infrastructure equipment to match whatever server load we have at any point in time," Westerheim says. "We can change our support equipment -- UPS, generator and our air conditioning configuration -- to match the load."

Westerheim uses a metric called Data Center Infrastructure Efficiency (DCiE), which was developed by The Green Grid industry consortium. DCiE, which is shown as a percentage, is calculated by dividing IT equipment power by total facility power and multiplying the result by 100. The higher the DCiE, the better. Most DCiE ratings run from 30% to 60%.
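The DCiE formula described above is a one-line calculation; the sample wattages here are hypothetical, chosen only to land inside the 30% to 60% range the article cites:

```python
def dcie(it_equipment_kw: float, total_facility_kw: float) -> float:
    """Data Center Infrastructure Efficiency (The Green Grid), as a percentage.

    Higher is better: more of the facility's total power reaches IT equipment.
    """
    return it_equipment_kw / total_facility_kw * 100

# Hypothetical facility: 450 kW of IT load out of 1,000 kW drawn at the meter.
print(dcie(450, 1_000))  # 45.0
```

Note that DCiE is the reciprocal view of the better-known PUE (total facility power divided by IT power): a DCiE of 50% corresponds to a PUE of 2.0.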

The new Eagan facility represents the first time Unisys has built a data center with energy efficiency as its goal, and the first time it is using the DCiE metric. Now Unisys wants to apply some of what it has learned to its other data centers, Bond says.

Energy efficiency "is extremely important -- not just power consumption, but also data center footprint and power management. It's how it all comes together that keeps down costs," Bond says.

One hurdle for Unisys is that responsibility for its data centers' power management is largely in the realm of facilities folks, not IT folks. Getting the monthly electric bill in the hands of the CIO "is something we need to focus on," Bond admits. "It's on our agenda for 2008."

Helping others

Inside the data-center addition, Unisys is helping its clients virtualize servers to reduce their footprint and power consumption.

For example, Unisys this fall helped the city of Minneapolis lower the number of its servers from 76 1U servers down to three Unisys ES7000 eight-way servers running VMware. Overall, the city had 128 servers, and 76 of them were considered candidates for virtualization.

"We figured we saved 1 kilowatt per server, for a total of 70 kilowatts," says Joe Helm, a Unisys consultant and black belt in Six Sigma, a quality management technique.

Unisys owns these servers and operates them for the city under an outsourcing contract awarded five years ago.

"Not only does virtualization give us savings at the data center, it also gives us room for growth because our server footprint is reduced," Helm says. "And it gives us flexibility. It used to take us a week or two weeks to stand up a server, but now we can do it in a matter of hours."

Unisys is applying the same strategy to its own IT infrastructure. The company recently analyzed 1,000 of its own servers and says 400 are candidates for virtualization, which will save the company 400 kW.
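A back-of-the-envelope sketch of this sizing logic, using the roughly 1 kW-per-retired-server figure Helm cites (the function name and the default rate are assumptions for illustration, not a Unisys tool):

```python
def consolidation_savings_kw(candidate_servers: int, kw_per_server: float = 1.0) -> float:
    """Rough power savings from retiring physical servers via virtualization.

    Assumes each retired server frees about 1 kW, per the estimate quoted
    in the article; ignores the added draw of the consolidation hosts.
    """
    return candidate_servers * kw_per_server

# Unisys's internal analysis: 400 of 1,000 servers are virtualization candidates.
print(consolidation_savings_kw(400))  # 400.0
```

In practice the net figure would be reduced by the power drawn by the new host servers, as in the Minneapolis project where 76 retired servers yielded about 70 kW of savings.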

"There are four things driving this trend: maintenance, footprint, power and cooling," Helm says. "Having three servers vs. 76 servers is better for your data center."


