Georgia Tech addressing “Moore’s Law of data centers”

Simulated data center used to examine heating, cooling issues

Georgia Tech researchers are looking to cut by up to 15% the electricity needed to cool data centers, which are becoming increasingly packed with servers and other network gear boasting ever more powerful processors.

Georgia Institute of Technology researchers are using a 1,100 sq. ft. simulated data center to explore airflow patterns, make temperature readings on systems and more. Fog generators, lasers and infrared sensors are among the tools used to visualize the best setup.

According to the school, a large server cabinet produced 1 to 5 kilowatts of heat five years ago, but today's versions are closer to 28 kilowatts, and new machines could generate twice that.

"Some people have called this the Moore’s Law of data centers," said Yogendra Joshi, a professor in Georgia Tech’s Woodruff School of Mechanical Engineering, in a statement. "The growth of cooling requirements parallels the growth of computing power, which doubles roughly every 18 months."
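As a back-of-the-envelope sketch of that trend, doubling every 18 months compounds quickly; the function below (a hypothetical illustration, not from the researchers) projects a cabinet's heat load under that assumption:

```python
# Projects heat load assuming it doubles every 18 months,
# per the "Moore's Law of data centers" trend quoted above.
def projected_load(initial_kw, years, doubling_period_years=1.5):
    return initial_kw * 2 ** (years / doubling_period_years)

# A 5 kW cabinet after five years of 18-month doublings:
print(round(projected_load(5, 5), 1))  # ~50.4 kW
```

That pure-doubling projection lands in the same ballpark as the cited jump from 1–5 kilowatts to roughly 28 kilowatts over five years.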

The researchers are also developing algorithms that dynamically match shifting compute loads to the coolest machines available. Early adopters of virtualization technology have noted the cooling challenges that go hand in hand with it.
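The article doesn't describe the researchers' algorithms, but the basic idea of thermal-aware placement can be sketched with a simple greedy heuristic: always send the next job to the currently coolest server. The server names, temperatures, and per-job temperature bump below are all hypothetical.

```python
import heapq

def assign_jobs(server_temps, jobs):
    """Greedily assign each job to the currently coolest server.

    server_temps: dict mapping server name -> inlet temperature (deg C).
    jobs: list of (job_id, estimated_temp_rise) tuples.
    Returns a dict mapping job_id -> server name.
    """
    # Min-heap keyed on temperature, so the coolest server pops first.
    heap = [(temp, name) for name, temp in server_temps.items()]
    heapq.heapify(heap)
    placement = {}
    for job_id, temp_rise in jobs:
        temp, name = heapq.heappop(heap)
        placement[job_id] = name
        # Crudely model the job's thermal load as a temperature bump,
        # so the server looks "warmer" for subsequent placements.
        heapq.heappush(heap, (temp + temp_rise, name))
    return placement

servers = {"rack-a": 22.0, "rack-b": 25.5, "rack-c": 21.0}
jobs = [("vm-1", 2.0), ("vm-2", 2.0), ("vm-3", 2.0)]
print(assign_jobs(servers, jobs))
# {'vm-1': 'rack-c', 'vm-2': 'rack-a', 'vm-3': 'rack-c'}
```

A production scheduler would fold in live sensor data and airflow models rather than a fixed temperature bump, but the greedy structure is the same.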

The researchers are also looking at how to best use waste heat removed from the data centers.

For more on network research, follow our Alpha Doggs blog.


Copyright © 2009 IDG Communications, Inc.