Data center density hits the wall


Using such techniques, data centers can support up to about 25 kW per rack with a computer room air conditioning system, HP's Gross estimates. "It requires careful segregation of cold and hot, eliminating mixing, optimizing the airflow. These are becoming routine engineering exercises," he says.

Liquid makes its entrance

While redesigning data centers to modern standards has helped reduce power and cooling problems, the newest blade servers are already exceeding 25 kW per rack. IT has spent the past five years tightening up racks, cleaning out raised floor spaces and optimizing air flows. The low-hanging fruit is gone in terms of energy efficiency gains. If densities continue to rise, containment will be the last gasp for computer-room air cooling.

Some data centers have already begun moving to liquid cooling to address high-density "hot spots." The most common technique, called closely coupled cooling, involves piping chilled liquid, usually water or glycol, into the middle of the raised-floor space to supply air-to-water heat exchangers within a row or rack. Kumar estimates that 20% of Gartner's corporate clients use this type of liquid cooling for at least some high-density racks.

These closely coupled cooling devices may be installed in a cabinet in the middle of a row of server racks, as data center vendor APC does with its InRow Chilled Water units, or they can attach directly onto each cabinet, as IBM does with its Rear Door Heat eXchanger.

Closely coupled cooling may work well for addressing a few hot spots, but it is a supplemental solution and doesn't scale well in a distributed computing environment, says Gross. IBM's Rear Door Heat eXchanger, which can remove up to 50,000 BTUs per hour -- about 15 kW -- can handle roughly half of the waste heat from ILM's 28-kW racks. But Clark would still need to rely on room air conditioners to remove the rest.
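For readers who want to check the math, here is a quick back-of-the-envelope sketch in Python. The 28-kW rack and 50,000 BTU-per-hour rating come from the figures above; the conversion factor between BTU per hour and kilowatts is the standard one.

```python
# Back-of-the-envelope check of the rear-door heat exchanger numbers
# cited above. The 28 kW rack and 50,000 BTU/hr rating come from the
# article; the BTU/hr-to-kW conversion factor is the standard one.

BTU_PER_HR_PER_KW = 3412.14              # 1 kW of heat = ~3,412 BTU/hr

rack_heat_kw = 28.0                      # ILM's high-density rack
exchanger_btu_hr = 50_000                # rear-door exchanger rating
exchanger_kw = exchanger_btu_hr / BTU_PER_HR_PER_KW

remaining_kw = rack_heat_kw - exchanger_kw
print(f"Exchanger capacity: {exchanger_kw:.1f} kW")              # ~14.7 kW
print(f"Left for room air conditioning: {remaining_kw:.1f} kW")  # ~13.3 kW
```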


Closely coupled cooling also requires building out a new infrastructure. "Water is expensive and adds weight and complexity," Gross says. It's one thing to run water to a few mainframes. But the network of plumbing required to supply chilled water to hundreds of cabinets across a raised floor is something most data center managers would rather avoid. "The general mood out there is, as long as I can stay with conventional cooling using air, I'd rather do that," he says.

"In the distributed model, where they use 1U or 2U servers, the power needed to support thousands of these nodes may not be sustainable," Schmidt says. He thinks data centers will have to scale up the hardware beyond 1U or 2U distributed x86-class servers to a centralized model using virtual servers running on a mainframe or high-performance computing infrastructure.

One way to greatly improve heat-transfer efficiency is through direct-liquid cooling. This involves piping chilled water through specialized cold plates that make direct contact with the processor. This is important because as processor temperatures rise, transistors suffer from an increase in leakage current. Leakage is a phenomenon in which a small amount of current continues to flow through each transistor, even when the transistor is off.
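To make the temperature dependence concrete, here is a toy Python model built on the common rule of thumb that leakage power roughly doubles for every 10-degree-Celsius rise in silicon temperature. The doubling interval and the baseline wattage are illustrative assumptions, not figures from the vendors quoted here.

```python
# Toy model of temperature-dependent leakage power. Assumes leakage
# roughly doubles every 10 degrees C -- a common rule of thumb, not a
# figure from the article. Baseline values are invented for illustration.

def leakage_watts(temp_c, base_watts=10.0, base_temp_c=60.0, doubling_c=10.0):
    """Estimate leakage power at a given silicon temperature."""
    return base_watts * 2 ** ((temp_c - base_temp_c) / doubling_c)

for t in (60, 70, 85, 100):
    print(f"{t} C: ~{leakage_watts(t):.0f} W of leakage")
# 60 C: ~10 W, 70 C: ~20 W, 85 C: ~57 W, 100 C: ~160 W
```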

Using cold plates reduces processor leakage problems by keeping the silicon cooler, allowing servers to run faster -- and hotter. In a test of a System p 575 supercomputer, Schmidt says IBM used direct-liquid cooling to improve performance by one-third while keeping an 85 kW cabinet cool. Approximately 70% of the system was water-cooled.

Few data center managers can envision moving most of their server workloads onto expensive, specialized supercomputers or mainframes.

But IBM's Bradicich says incremental improvements such as low-power chips or variable-speed fans aren't going to solve the problem alone. Architectural improvements to the fundamental x86 server platform will be needed.

Cost, convergence and economies of scale

Like HP and other IT vendors, IBM is working on what Bradicich calls "operational integration" -- a converged infrastructure that combines compute, storage and networking in a single package. While the primary goal of converged infrastructure is to make systems management easier, Bradicich sees power and cooling as part of that package. In IBM's view, the x86 platform will evolve into highly scalable, and perhaps somewhat more proprietary, symmetric multiprocessing systems designed to dramatically increase the workloads supported per server -- and per rack. Such systems would require bringing chilled water to the rack to meet cooling needs.

But HP's Gross says things may be going the other direction. "Data centers are going bigger in footprint, and people are attempting to distribute them," he says. "Why would anyone spend the kind of money needed to achieve these super-high densities?" he asks -- particularly when they may require special cooling.

IBM's Schmidt says data centers with room-based cooling -- especially those that have moved to larger air handlers to cope with higher heat densities -- could save considerable energy by moving to water.

But Microsoft's Belady thinks liquid cooling's appeal will be limited to a single niche: high-performance computing. "Once you bring liquid cooling to the chip, costs start going up," he contends. "Sooner or later, someone is going to ask the question: Why am I paying so much more for this approach?"

More energy-efficiency tips

Turn on power management. Most servers ship with energy-saving technologies that do things like control cooling-fan speeds and step down CPU power during idle times, but these features often aren't turned on by default -- and many data centers still don't enable them. Consider enabling them as standard practice, except in environments where high availability and fast response times are mission-critical. (See the sketch after these tips for one way to check.)

Create zones. Break the data center floor into autonomous zones, where each block of racks has its own dedicated power and cooling resources. Zoning involves careful separation of hot and cold air but usually doesn't require that an area be physically partitioned off.

Douse hot spots with closely coupled cooling. A series of high power-density racks can create a hot spot that the room air conditioning system can't handle, or that forces IT to overcool the entire room to address a few cabinets. In those cases, consider supplemental spot-cooling systems. These require piping chilled liquid -- either cold water or glycol -- to a heat exchanger that's either attached or adjacent to a high-density cabinet.
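As one concrete, Linux-specific illustration of the first tip, the short Python sketch below reads the CPU frequency-scaling governor for each core through the standard cpufreq sysfs interface, a quick way to spot machines where power management was never enabled. The paths assume a Linux host; whether to enable these features still depends on your availability and latency requirements.

```python
# Quick audit of CPU frequency-scaling governors on a Linux host.
# Assumes the standard cpufreq sysfs interface; adjust for your platform.
# Reporting only -- changing governors should go through your normal
# configuration-management process.

from pathlib import Path

cpu_root = Path("/sys/devices/system/cpu")
for gov_file in sorted(cpu_root.glob("cpu[0-9]*/cpufreq/scaling_governor")):
    cpu = gov_file.parts[-3]            # e.g. "cpu0"
    governor = gov_file.read_text().strip()
    print(f"{cpu}: {governor}")         # e.g. "cpu0: powersave" or "performance"
```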

Belady doesn't see liquid cooling as a viable alternative in distributed data centers such as Microsoft's.

The best way to take the momentum away from ever-increasing power density is to change the chargeback method for data center use, says Belady. Microsoft changed its cost allocation strategy and started billing users based on power consumption as a portion of the total power footprint of the data center, rather than basing it on floor space and rack utilization. After that, he says, "the whole discussion changed overnight." Power consumption per rack started to dip. "The whole density thing gets less interesting when your costs are allocated based on power consumed," he says.

Once Microsoft began charging for power, its users' focus changed from getting the most processing power in the smallest possible space to getting the most performance per watt. That may or may not lead to higher-density choices -- it depends on the overall energy efficiency of the proposed solutions. On the other hand, Belady says, "if you're charging for space, the motivation is 100% about density."
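To see why the discussion changed, consider a minimal sketch of the two chargeback models with invented numbers: one bills a tenant by the share of racks occupied, the other by the tenant's share of the facility's metered power draw. All of the rates and rack figures below are hypothetical.

```python
# Compare two chargeback models with invented numbers: billing by rack
# space versus billing by share of metered power (the approach Belady
# describes Microsoft moving to). All rates and figures are illustrative.

monthly_facility_cost = 500_000.0   # hypothetical total operating cost
total_racks = 500
total_power_kw = 4_000.0            # hypothetical facility IT load

# A tenant with 10 racks of dense servers drawing 20 kW each.
tenant_racks = 10
tenant_power_kw = 10 * 20.0

space_based = monthly_facility_cost * (tenant_racks / total_racks)
power_based = monthly_facility_cost * (tenant_power_kw / total_power_kw)

print(f"Space-based bill: ${space_based:,.0f}")   # $10,000
print(f"Power-based bill: ${power_based:,.0f}")   # $25,000: density now costs more
```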


Today, vendors design for the highest density, and most users select high-density servers to save on space charges. Users may pay more for a higher-density server infrastructure to save on floor-space charges, even though the extra power distribution and cooling that density demands drives up energy costs. But on the back end, 80% of operating costs scale with electricity use -- and with the electromechanical infrastructure needed to deliver power and cool the equipment.

Run 'em hard, run 'em hot

Belady, who previously worked on server designs as a distinguished engineer at HP, argues that IT equipment should be designed to work reliably at higher operating temperatures. Current equipment is designed to operate at a maximum temperature of 81 degrees Fahrenheit. That's up from 72 degrees in 2004, when ASHRAE Technical Committee 9.9 set the official specification.

But Belady says running data center gear even hotter than 81 degrees could result in enormous efficiency gains.

"Once you start going to higher temperatures, you open up new opportunities to use outside air and you can eliminate a lot of the chillers ... but you can't go as dense," he says. Some parts of the country already turn off chillers in the winter and use economizers, which use outside air and air-to-air or air-to-water heat exchangers, to provide "free cooling" to the data center.

If IT equipment could operate at 95 degrees, most data centers in the U.S. could be cooled with air-side economizers almost year-round, he argues. And, he adds, "if I could operate at 120 degrees ... I could run anywhere in the world with no air conditioning requirements. That would completely change the game if we thought of it this way." Unfortunately, there are a few roadblocks to getting there. (See "The case for, and against, running servers hotter.")
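The underlying calculation is simple: count how many hours of the year the outside air is at or below the allowable server inlet temperature. The sketch below shows the idea with a short, invented list of hourly readings; a real analysis would use a full year of local weather data and account for humidity and heat-exchanger approach temperatures.

```python
# Rough estimate of air-side economizer ("free cooling") hours for a
# given allowable server inlet temperature. A real analysis would use
# 8,760 hourly readings from local weather data; the short list below
# is an invented stand-in so the sketch runs.

def free_cooling_hours(hourly_outdoor_temps_f, max_inlet_f):
    """Count hours when outside air alone could meet the inlet limit."""
    return sum(1 for t in hourly_outdoor_temps_f if t <= max_inlet_f)

# Invented sample: a dozen hourly readings spanning a hot afternoon.
sample_temps_f = [58, 63, 71, 78, 84, 91, 97, 102, 96, 88, 79, 68]

for limit in (81, 95, 120):
    hours = free_cooling_hours(sample_temps_f, limit)
    print(f"Inlet limit {limit} F: {hours} of {len(sample_temps_f)} hours on outside air")
```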

Belady wants equipment to be tougher, but he also thinks servers are more resilient than most administrators realize. He believes that the industry needs to rethink the kinds of highly controlled environments in which distributed computing systems are hosted today.

The ideal strategy, he says, is to develop systems that optimize each rack for a specific power density and manage workloads to ensure that each cabinet hits that number all the time. In this way, both power and cooling resources would be used efficiently, with no waste from under- or overutilization. "If you don't utilize your infrastructure, that's actually a bigger problem from a sustainability standpoint than overutilization," he says.
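In code, that strategy looks something like power-aware placement: given a per-rack power budget, assign workloads so each rack lands as close to its target as possible without exceeding it. The sketch below uses a greedy best-fit heuristic and invented workload figures; a production system would also weigh redundancy, network locality and live thermal telemetry.

```python
# Sketch of power-aware workload placement: fill each rack as close to
# its power budget as possible without exceeding it. A greedy best-fit-
# decreasing heuristic with invented numbers, not a production scheduler.

def place_workloads(workload_kw, rack_budget_kw, num_racks):
    racks = [0.0] * num_racks                 # current draw per rack
    placement = {i: [] for i in range(num_racks)}
    for wl, kw in sorted(enumerate(workload_kw), key=lambda x: -x[1]):
        # Put each workload in the fullest rack that still has headroom.
        candidates = [r for r in range(num_racks) if racks[r] + kw <= rack_budget_kw]
        if not candidates:
            raise RuntimeError(f"workload {wl} ({kw} kW) does not fit")
        target = max(candidates, key=lambda r: racks[r])
        racks[target] += kw
        placement[target].append(wl)
    return racks, placement

loads, layout = place_workloads(
    [7.0, 7.0, 6.0, 4.0, 2.0, 2.0], rack_budget_kw=14.0, num_racks=2)
print(loads)    # [14.0, 14.0] -- both racks at their target density
```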

What's next

Belady sees a bifurcation coming in the market. High-performance computing will go to water-based cooling while the rest of the enterprise data center -- and Internet-based data centers like Microsoft's -- will stay with air but move into locations where space and power costs are cheaper so they can scale out.

More energy-efficiency tips

Retrofit for efficiency. While new data center designs are optimized for cooling efficiency, many older ones still have issues. If you haven't done the basics, optimizing perforated-tile placements in the cold aisle or putting blankets over cabling in the floor space are good places to start.

Install temperature monitors. It's not enough to monitor the room temperature. Adding more sensors allows finer-grained control at the row or rack level.

Turn up the heat. The key to raising efficiency is raising the intake temperature at the cabinets: the higher the intake temperature, the more energy-efficient the data center. You probably can't cool an entire cabinet with an intake temperature of 81 degrees, but you probably don't need to set the thermostat as low as 65, either.

Paul Prince, CTO of the enterprise product group at Dell, doesn't think most data centers will hit the power-density wall anytime soon. The average power density per rack is still manageable with room air, and he says hot aisle/cold aisle designs and containment systems that create "super-aggressive cooling zones" will help data centers keep up. Yes, densities will continue their gradual upward arc. But, he says, it will be incremental. "I don't see it falling off a cliff."

At ILM, Clark sees the move to water, in the form of closely coupled cooling, as inevitable. Clark admits that he and most of his peers are uncomfortable with the idea of bringing water into the data center, but he thinks that high-performance data centers like his will have to adapt. "We're going to get pushed out of our comfort zone," he says. "But we're going to get over that pretty quickly."

Robert L. Mitchell writes technology-focused features for Computerworld. Follow Rob on Twitter at http://twitter.com/rmitch, send e-mail to rmitchell@computerworld.com or subscribe to his RSS feed.

This story, "Data center density hits the wall" was originally published by Computerworld.


