Liquid cooling: The next step?


The upcoming final volume of Nemertes' data center research examines facilities challenges. Power and cooling were clearly the “hot” (pun intended) topics this year.

We’ve discussed strategies for both power savings and cooling in this newsletter. One question we are often asked by our clients is, “Are servers going to keep getting hotter?”

Clearly, chip vendors and server vendors have focused on reducing power input and heat output in the last few years. As data center gear purchases catch up with the current “generation” of CPUs, heat and power loads are still increasing. Even if we are reaching a plateau of heat output, the plateau is pretty high.

Liquid cooling is increasingly seen as the inevitable next step. Sixty percent of participants in our research are currently investigating liquid cooling systems (above-rack, in-row or in-rack; see below), but fewer than 10% are implementing them today.

New standards for liquid cooling systems may spur faster adoption. Last month, the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) published a book on data center liquid cooling that could promote standardization of the technology.

One thing is for sure: Demand for processing will keep increasing, and heat is an unavoidable by-product of any computation.

To be pedantic, liquid cooling is already present in almost all data centers: chilled water systems are used to condition the ambient air. When people talk about liquid cooling nowadays, though, they are usually referring to liquid cooling at the rack or even server level. Here’s a liquid cooling primer:

* Water can carry more than 3,500 times as much heat as air per unit volume (volumetric heat capacity).

* Chemical refrigerants can be used instead of water. One advantage: They evaporate and do not cause floods if they leak.

* Cooling the ambient air in a room is the standard, but it is inefficient.

* Getting cold air to the places where heat is generated requires fans to push the air around. The fans themselves consume a lot of the power that is used for cooling.

* Most data centers already have water pipes feeding wall-mounted A/C units. Concerns about leaks can be addressed by careful system design and leak-mitigation measures.
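The roughly 3,500x figure for water versus air follows from their volumetric heat capacities. A back-of-the-envelope check, using textbook property values near room temperature (the exact ratio shifts slightly with temperature and pressure), might look like this:

```python
# Compare how much heat a given volume of water vs. air can carry
# per degree of temperature rise (volumetric heat capacity).
# Property values are approximate, at ~20 C and atmospheric pressure.

WATER_DENSITY = 998.0         # kg/m^3
WATER_SPECIFIC_HEAT = 4186.0  # J/(kg*K)

AIR_DENSITY = 1.2             # kg/m^3
AIR_SPECIFIC_HEAT = 1005.0    # J/(kg*K)

water_vol_heat = WATER_DENSITY * WATER_SPECIFIC_HEAT  # J/(m^3*K)
air_vol_heat = AIR_DENSITY * AIR_SPECIFIC_HEAT        # J/(m^3*K)

ratio = water_vol_heat / air_vol_heat
print(f"Water carries ~{ratio:,.0f}x more heat per unit volume than air")
```

The result lands in the 3,400-3,500 range with these values, which is where the figure quoted above comes from.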

Many different approaches can be used to get the cooling as close as possible to the hot spots (CPU, RAM, other chips, hard drives):

* A/C units can be placed above the racks (e.g., Liebert solutions) or within the rows of racks (e.g., APC solutions). This approach removes the heat from high-density racks so that the ambient air can maintain a uniform temperature, avoiding hot spots. Chilled water or a chemical refrigerant is used to feed the A/C units.

* A/C units can be placed inside the racks themselves (Rittal, Liebert, APC and others). Coolant (water or a chemical refrigerant) is delivered via pipes or flexible hoses to the rack, where it chills the air inside.

* Heat-exchange units can be placed inside the servers, either chilling the air within the server or coming into direct contact with the chips. This approach is not yet practical for commodity systems; it is found occasionally in supercomputers and in do-it-yourself kits for the PC “over-clocking” scene (the PC equivalent of hot-rod cars).
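To see why delivering coolant directly to the rack is attractive, consider the flow rates needed to carry away a rack's heat. Using the basic relation Q = flow x specific heat x temperature rise, with a hypothetical 20 kW rack and a 10 K coolant temperature rise (illustrative numbers, not figures from the research):

```python
# Flow rate needed to remove a heat load: Q = m_dot * cp * dT,
# so m_dot = Q / (cp * dT). Hypothetical load and temperature rise.

HEAT_LOAD_W = 20_000.0  # rack heat output in watts (hypothetical)
DELTA_T_K = 10.0        # coolant temperature rise in kelvin (hypothetical)

# Approximate fluid properties near room temperature.
WATER_CP = 4186.0   # J/(kg*K)
WATER_RHO = 998.0   # kg/m^3
AIR_CP = 1005.0     # J/(kg*K)
AIR_RHO = 1.2       # kg/m^3

water_kg_s = HEAT_LOAD_W / (WATER_CP * DELTA_T_K)
air_kg_s = HEAT_LOAD_W / (AIR_CP * DELTA_T_K)

water_l_s = water_kg_s / WATER_RHO * 1000.0  # litres per second
air_m3_s = air_kg_s / AIR_RHO                # cubic metres per second

print(f"Water: {water_l_s:.2f} L/s")   # roughly half a litre per second
print(f"Air:   {air_m3_s:.2f} m^3/s")  # well over a cubic metre per second
```

Moving half a litre of water per second through a hose is trivial; moving more than a cubic metre of air per second requires the banks of power-hungry fans mentioned above.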

It is likely that over the next decade, cooling systems will move closer to the chip, eventually coming into direct contact with the silicon itself. Much higher cooling capacity would allow chips to run much hotter. Replacing air with water would also eliminate the need for fans to push air around, further reducing wasted power.


Copyright © 2007 IDG Communications, Inc.