How to cool a data center you could bake a pizza in

Opinion
Mar 21, 2006 | 3 mins
Data Center

Moore’s Law has been the driving force behind computing for decades. Every time pundits have heralded its end, scientists have crested the next performance “hill” only to find more capacity just beyond. That progress has brought us to today’s extreme computing density and heat output.

While chip makers try to increase efficiency, data center managers have to support new technologies such as blade servers with “yesterday’s” cooling infrastructure. From a heat-output perspective, data centers are hell on earth: row upon row of systems putting out many times the heat of a commercial pizza oven.

How do you meet business needs for computing performance without a major facilities overhaul or new buildings? Let’s look at strategies, from liquid cooling to airflow planning.

Few companies can afford to build a new data center or completely refurbish an existing one. By necessity, any strategy for growth and high-density computing must be flexible enough to accommodate gradual technology refreshes and varying physical constraints (pipes, floor plenum and so on). Some data centers will be easier to upgrade than others.

Even within a single data center, you are unlikely to find a homogeneous environment. High-density racks may sit right next to low-density ones, and growth will not be uniform across platforms, areas or racks. Cooling capacity is not uniform either: even though most data centers use forced-air cooling, temperatures vary across the room, and computer room AC (CRAC) units lose effectiveness with distance.
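
One practical way to see that non-uniformity is a per-zone heat-load audit: total what the racks in each area dissipate and compare it with the derated cooling capacity serving that area. Here is a minimal sketch in Python; the zone names, rack loads and capacity figures are illustrative assumptions, not measurements from any real facility.

```python
from collections import defaultdict

# Illustrative inventory: (zone, rack_id, heat load in kW).
# In practice these come from metered PDUs or nameplate estimates.
racks = [
    ("A", "a01", 3.5), ("A", "a02", 12.0), ("A", "a03", 18.5),
    ("B", "b01", 2.0), ("B", "b02", 4.5), ("B", "b03", 6.0),
]

# Usable cooling per zone in kW -- derated from CRAC nameplate
# capacity, since effectiveness drops with distance from the unit.
zone_cooling_kw = {"A": 25.0, "B": 20.0}

zone_load = defaultdict(float)
for zone, _rack, kw in racks:
    zone_load[zone] += kw

for zone in sorted(zone_load):
    load, capacity = zone_load[zone], zone_cooling_kw[zone]
    status = "OK" if load <= capacity else "HOT SPOT"
    print(f"zone {zone}: {load:.1f} kW load vs {capacity:.1f} kW cooling -> {status}")
```

Here are two of the strategies we see adopted in data centers: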

* Hot and cold aisles – This is one of the most common and arguably most successful approaches to high-density cooling. High-density racks are arranged in paired rows facing each other across a two-tile-wide cold aisle, where perforated floor tiles deliver cold air to the rack fronts. Fans in the racks draw that air through and exhaust it out the rear, so hot aisles form between the backs of adjacent rows. CRAC units at the ends of the hot aisles (facing down the aisle) draw off the hot air and push it back under the floor to the perforated tiles, completing the cycle.

* Liquid cooling systems – Whether it uses gas refrigerant or water, liquid cooling lets data center managers deliver cooling through pipes instead of relying on airflow, which offers far more control and sidesteps the difficulty of directing air. It also brings much higher cooling capacity straight to the racks without enlarging the under-floor plenum. At today’s computing densities of up to 20 kilowatts per rack, forced air alone would require almost six feet of under-floor space (see the airflow arithmetic after this list).
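
To see why 20 kilowatts per rack strains forced air, a standard rule of thumb says that removing Q watts with a ΔT °F air-temperature rise from rack inlet to outlet takes roughly 3.16 × Q / ΔT cubic feet per minute (CFM). Here is a minimal sketch of that arithmetic; the 20 °F rise is an assumed typical value, not a figure from this article.

```python
# Rule-of-thumb airflow for air cooling:
#   heat (BTU/hr) = 1.08 * CFM * delta_T (deg F), and 1 W = 3.412 BTU/hr,
#   which rearranges to CFM ~= 3.16 * watts / delta_T.

def required_cfm(load_watts: float, delta_t_f: float = 20.0) -> float:
    """Airflow in CFM needed to remove load_watts at a delta_t_f
    degree F rise from rack inlet to outlet."""
    return 3.16 * load_watts / delta_t_f

for kw in (2, 5, 10, 20):
    print(f"{kw:>2} kW rack -> ~{required_cfm(kw * 1000):,.0f} CFM")
```

At 20 kW that works out to more than 3,000 CFM per rack; moving that much air to every rack position is what pushes plenum depth toward six feet, while a chilled-liquid loop can carry the same heat in a small pipe.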

The two strategies above are not mutually exclusive. In fact, liquid cooling is probably best applied in a hot/cold-aisle configuration with end-of-aisle CRAC units.

An additional consideration with liquid cooling is the cost and complexity of running pipes, along with the danger of leaks (which is why gas refrigerant is a better choice). High-density cooling is not easy, but cooling vendors such as Liebert and APC offer many innovative options to suit different data center environments and growth needs.

One last thought: Don’t do this on your own. Get a consultant with plenty of real-world experience.

For more on cooling data centers, see this story.

NOTE: Don’t forget to check out the Data Center World spring conference in Atlanta, March 19-23, 2006. The conference is organized by AFCOM, the leading association for data center professionals.