
68-degree data centers becoming a thing of the past, APC says

By , Network World
June 12, 2009 03:36 PM ET

Network World - Cooling a data center to 68 degrees may be going out of style, APC power and cooling expert Jim Simonelli says.

Servers, storage and networking gear are often certified to run at temperatures exceeding 100 degrees, and with that in mind, many IT pros are becoming less stringent about temperature limits.

Servers and other equipment “can run much hotter than people allow,” Simonelli, the chief technical officer at the Schneider Electric-owned APC, said in a recent interview. “Many big data center operators are experienced with running data centers at close to 90 degrees [and with more humidity than is typically allowed]. That’s a big difference from 68.”

Simonelli's point isn’t exactly new. Google, which runs some of the country’s largest data centers, published research two years ago that found temperatures exceeding 100 degrees may not harm disk drives.

But new economic pressures are helping data center professionals realize the benefits of turning up the thermostat, Simonelli says. People are starting to realize they could save up to 50% of their energy budget just by changing the set point from 68 to 80 degrees, he says.
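For illustration only, here is a minimal sketch of where a figure like that can come from. It is not from APC or the article: it assumes a commonly cited rule of thumb of roughly 4% cooling-energy savings per degree Fahrenheit that the set point is raised, and it speaks to cooling energy rather than the whole energy budget.

    # Back-of-envelope sketch (not from the article): estimate cooling-energy
    # savings from raising the supply-air set point. The ~4% per degree F
    # figure is an assumed rule of thumb, used here purely for illustration.

    def estimated_cooling_savings(old_setpoint_f: float,
                                  new_setpoint_f: float,
                                  savings_per_degree: float = 0.04) -> float:
        """Return the estimated fractional reduction in cooling energy."""
        degrees_raised = new_setpoint_f - old_setpoint_f
        # Cap at 100%: a linear rule of thumb breaks down for large changes.
        return min(degrees_raised * savings_per_degree, 1.0)

    if __name__ == "__main__":
        saving = estimated_cooling_savings(68, 80)
        print(f"Raising the set point from 68 to 80 degrees saves roughly "
              f"{saving:.0%} of cooling energy under these assumptions.")

Under those assumptions, a 12-degree change lands in the same ballpark as the figure Simonelli cites.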

Going forward, “I think the words 'precision cooling' are going to take on a different meaning,” Simonelli says. “You’re going to see hotter data centers than you’ve ever seen before. You’re going to see more humid data centers than you’ve ever seen before.”

With technologies such as virtualization increasingly placing redundancy in the software layer, hardware resiliency is becoming less critical, which reduces the risk posed by running equipment hotter.

Server virtualization also imposes new power and cooling challenges, however, because hypervisors let each server run at much higher CPU utilization. Virtualization lets IT shops consolidate onto fewer servers, but the remaining machines end up doing more work and need more cold air delivered to a smaller physical area.

When lots of servers are shut off, the data center has to be reconfigured so that cooling isn't directed at empty space, Simonelli notes.

“The need to consider power and cooling alongside virtualization is becoming more and more important,” he says. “If you just virtualize, but don’t alter your infrastructure, you tend to be less efficient than you could be.”

Enterprises need monitoring tools to understand how power needs change as virtual servers move from one physical host to another. Before virtualization, a critical application might sit on a certain server in a certain rack, with two dedicated power feeds, Simonelli notes. With live migration tools, a VM could move from a server with fully redundant power and cooling supplies to a server with something less than that, so visibility into power and cooling is more important than ever.

The ability to move virtual machines at will means “that technology is becoming disconnected from where you have appropriate power and cooling capacity,” Simonelli says.
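As a hypothetical sketch of the kind of check such monitoring might perform before a live migration, consider the following; the class and field names are illustrative, not an actual APC or hypervisor API.

    # Hypothetical sketch (not an APC or hypervisor API): before live-migrating
    # a VM, check that the destination host's rack offers the power redundancy
    # and cooling headroom the workload was originally provisioned with.

    from dataclasses import dataclass

    @dataclass
    class HostFacilities:
        name: str
        redundant_power_feeds: int   # independent power feeds to the rack
        cooling_headroom_kw: float   # spare cooling capacity at the rack

    @dataclass
    class VmRequirements:
        name: str
        min_power_feeds: int
        est_load_kw: float

    def safe_to_migrate(vm: VmRequirements, dest: HostFacilities) -> bool:
        """Return True only if the destination meets the VM's power and cooling needs."""
        return (dest.redundant_power_feeds >= vm.min_power_feeds
                and dest.cooling_headroom_kw >= vm.est_load_kw)

    if __name__ == "__main__":
        critical_app = VmRequirements("billing-db", min_power_feeds=2, est_load_kw=0.8)
        candidate = HostFacilities("rack14-host03", redundant_power_feeds=1,
                                   cooling_headroom_kw=1.5)
        if not safe_to_migrate(critical_app, candidate):
            print(f"Blocking migration of {critical_app.name} to {candidate.name}: "
                  "destination lacks required power redundancy or cooling headroom.")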
