Network World - Energy consumption in corporate data centers doubled between 2000 and 2005, due in large part to the growing use of volume servers, according to a new report.
The study was commissioned by AMD and conducted by Jonathan Koomey, a staff scientist at Lawrence Berkeley National Laboratory and a consulting professor at Stanford University.
Koomey found that servers and associated infrastructure, such as cooling and uninterruptible power supplies, in U.S. data centers consumed about 45 billion kilowatt hours of electricity in 2005, accounting for about 1.2% of the country’s electricity consumption, roughly equal to the power drawn by the nation’s color televisions. The electricity costs for the servers and associated infrastructure reached $2.7 billion.
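As a rough sanity check on the figures quoted above (this calculation is illustrative, not part of the study), dividing the $2.7 billion electricity bill by the 45 billion kilowatt hours consumed gives the implied average price per kilowatt hour:

```python
# Back-of-the-envelope check of the article's figures (illustrative only).
total_kwh = 45e9    # ~45 billion kWh consumed by servers and infrastructure in 2005
total_cost = 2.7e9  # ~$2.7 billion in electricity costs

price_per_kwh = total_cost / total_kwh
print(f"Implied average price: ${price_per_kwh:.3f}/kWh")  # → about $0.060/kWh
```

That works out to roughly six cents per kilowatt hour, in line with mid-2000s U.S. commercial electricity rates.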
Koomey found that the bulk of the increase, about 90%, was due to the rise in the number of volume servers — systems priced below $25,000, most of them powered by AMD or Intel processors. While these systems have become increasingly powerful, energy usage per server has risen only slightly, from about 190 watts per server in 2000 to about 220 watts per server in 2005. The spike in energy consumption is instead due mainly to the dramatic growth in the number of volume servers installed, which jumped from about 12 million in 2000 to 26 million in 2005.
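The split between the two growth factors can be seen by multiplying out the figures above (a sketch using the article's rounded numbers, not the study's raw data):

```python
# Illustrative arithmetic from the figures quoted in the article.
servers_2000, watts_2000 = 12e6, 190  # ~12 million volume servers at ~190 W each
servers_2005, watts_2005 = 26e6, 220  # ~26 million volume servers at ~220 W each

draw_2000 = servers_2000 * watts_2000 / 1e9  # aggregate draw in gigawatts
draw_2005 = servers_2005 * watts_2005 / 1e9

print(f"Aggregate draw: {draw_2000:.2f} GW in 2000, {draw_2005:.2f} GW in 2005")
print(f"Growth from server count:     {servers_2005 / servers_2000:.2f}x")  # ~2.17x
print(f"Growth from per-server power: {watts_2005 / watts_2000:.2f}x")      # ~1.16x
```

The server count more than doubled while per-server draw grew only about 16%, which is why the report attributes the bulk of the increase to sheer unit volume.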
“Mainly this is a story of volume servers becoming much more common,” says Koomey.
In addition, the systems are becoming increasingly dense, with blade servers and multicore machines, making it even more important for organizations to consider energy efficiency when planning data center deployments, Koomey says.
“In the data centers that we’ve done benchmarking on you typically find a third or half of the racks are empty because you’ve got air cooling and a lot of restraints in how many servers you can pack in,” he says.
At one organization, the data center had maxed out its available power, but a redesign allowed it to add more servers, Koomey says.
“They changed out the lighting and they moved some of the unnecessary air conditioning units and the fans and other stuff and they were able to increase the number of servers in their server room by 30% while remaining under the same power budget,” he says.
It’s those kinds of issues that make it imperative for IT and facilities teams to work together, says John Fruehe, worldwide development manager for servers and workstations at AMD.
“What was something [IT] never had to worry about in the past now is something they have to work hand-in-hand on with facilities because they’re literally at the end of their rope,” he says.
Koomey and AMD hope the study, titled "Estimating Total Power Consumption by Servers in the U.S. and the World," will elevate the ongoing conversation about data center energy issues.
They claim it is the first study to measure IT energy usage with specifics rather than anecdotal evidence. The study drew on IDC data to identify the number and types of servers running in corporate data centers, and used measured and estimated data to gauge energy usage. It is scheduled to be presented on Friday at an industry stakeholder workshop in Silicon Valley organized by the Environmental Protection Agency as part of its congressionally mandated investigation into data center energy usage. The EPA's data center energy report is expected in June.