Data center cooling is one of the top challenges facing companies today. In our recent research benchmark, “The New Data Center 2006,” 57% of participants said cooling was their top data center challenge.
These data center managers and IT executives were facing several cooling-related problems: insufficient capacity, hot spots around high-density racks, and an inability to direct cool air to the right areas of the data center.
There are many different approaches to solving a data center cooling problem: computational fluid dynamics (CFD) simulations of airflow and temperature, in-row and in-rack cooling, liquid cooling, and so on. But in the pursuit of a solution, we should not overlook some very effective and very inexpensive measures that can at least minimize the problem, or provide some relief, at very little cost.
With cooling, the problem is often not a lack of cooling capacity, but an inability to bring that capacity to bear in the right place within the data center. The cooling method used in most data centers is based on controlling ambient temperature and pushing cold air through perforated floor tiles by creating a pressure differential. By strategically placing the perforated tiles (“perf-tiles”) in a hot-aisle/cold-aisle configuration, cold air is fed to the front of a rack and then pulled into and through the rack by its fans. The resulting hot air is expelled out of the back of the rack into a hot aisle, where it circulates back to the CRAC (computer room air conditioner) units at the periphery of the room.
This approach, while simple enough in theory, is quite hard to apply effectively. For one thing, racks are not all equally hot: some racks are “denser” than others and therefore need more cooling. Also, the under-floor plenum is often used for cabling and may not have enough free space to accommodate the necessary airflow. Finally, the cold air under the floor “leaks” out through cable cutouts and other openings, reducing the pressure and diminishing the flow, and therefore the cooling efficiency. Companies facing these problems often simply throw money at them: buy more CRACs, invest in liquid cooling, or even build a new data center.
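To get a feel for just how unequal those cooling needs are, the standard sensible-heat rule of thumb for air relates a rack's power draw to the airflow required to carry its heat away. Here is a minimal sketch in Python; the 20 °F exhaust-to-intake temperature rise is an assumed planning value, not a figure from any particular facility:

```python
# Rough airflow needed to remove a rack's heat load, using the standard
# sensible-heat relation for air near sea level:
#   Q [BTU/hr] = 1.08 x CFM x deltaT [degF],  and  1 W = 3.412 BTU/hr
# so:
#   CFM = 3.412 x watts / (1.08 x deltaT)

def required_cfm(rack_watts: float, delta_t_f: float = 20.0) -> float:
    """Cubic feet per minute of cold air needed to hold the rack's
    exhaust-minus-intake temperature rise to delta_t_f degrees F."""
    return 3.412 * rack_watts / (1.08 * delta_t_f)

for kw in (2, 5, 10):
    print(f"{kw} kW rack: ~{required_cfm(kw * 1000):.0f} CFM")
```

The point of the arithmetic: a 10 kW rack needs roughly five times the airflow of a 2 kW rack, so feeding every rack through identical tiles guarantees that the dense ones run hot.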
Here are some simpler solutions that cost far less and may be very effective:
* Clean out the rats' nest of cables under the floor. Inventory all cables and remove the unused ones to create more space. Use metal brackets and cable ties to tame the rest into neat bundles, further increasing the plenum space available for airflow.
* Remember, not all perf-tiles are equal. You can vary the amount of airflow by varying the type of perf-tiles you use. Grate-type perforated tiles for example allow much more air to flow through. You can place these strategically in front of dense racks for improved cooling.
* Mind the gap. A lot of cold air leaks out in the wrong places: through gaps between tiles, through gaps between the floor and the walls, and through open cutouts for cables and pipes. Seal them with the inexpensive foam inserts or brush grommets offered by vendors such as ACLok and KoldLok.
* Guide the cold air and keep it “fenced” in. A very inexpensive and innovative option is using under-floor barriers, such as those from PlenaForm, to corral and direct cold air to where it is needed. If you wall off a section of the data center because you're building a NOC, or because it is unused, there is no need to pump cold air under that section's floor; even without perf-tiles, it will leak air unnecessarily. You can also use barriers under the floor to direct cold air to dense racks, and in the ceiling to return hot-aisle air to the CRAC.
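To make the perf-tile point above concrete, here is a rough back-of-the-envelope comparison in Python. The per-tile airflow figures are illustrative assumptions only (actual delivery depends on underfloor static pressure and the specific product), and the dense rack's airflow requirement is computed from the standard sensible-heat relation CFM = 3.412·W / (1.08·ΔT°F) at an assumed 20 °F temperature rise:

```python
import math

# Illustrative planning figures only -- real tile delivery depends on
# underfloor static pressure and the vendor's product (assumed values):
STANDARD_PERF_CFM = 500   # assumed: ~25%-open perforated tile
GRATE_TILE_CFM = 1500     # assumed: ~56%-open grate-style tile

def tiles_needed(rack_cfm: float, tile_cfm: float) -> int:
    """Whole tiles needed to deliver rack_cfm of cold air to one rack."""
    return math.ceil(rack_cfm / tile_cfm)

# Airflow a dense 10 kW rack needs at a 20 degF exhaust-intake rise:
rack_cfm = 3.412 * 10_000 / (1.08 * 20)  # ~1,580 CFM

print(f"standard tiles: {tiles_needed(rack_cfm, STANDARD_PERF_CFM)}")  # 4
print(f"grate tiles:    {tiles_needed(rack_cfm, GRATE_TILE_CFM)}")     # 2
```

Under these assumptions, swapping one standard tile for one grate tile in front of a dense rack roughly triples the cold air delivered there, which is exactly the kind of cheap, targeted fix the list above is about.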
So before you go out and spend tens of thousands of dollars on a CFD simulation or a new CRAC, see if you can solve or partially mitigate your cooling problem with some foam, some plastic barriers or some cable housekeeping. You could end up spending only a few tens or hundreds of dollars if you use some smarts and “cheap chills” instead.