A predicted explosion in power consumption by data centers has not materialized, thanks to advances in power efficiency and, ironically enough, the move to the cloud, according to a new report.
The study, published in the journal Science last week, notes that while there has been an increase in global data-center energy consumption over the past decade, this growth is negligible compared with the rise of workloads and deployed hardware during that time.
Data centers accounted for about 205 terawatt-hours of electricity usage in 2018, which is roughly 1% of all electricity consumption worldwide, according to the report. (That's well below the often-cited statistic that data centers consume 2% of the world's electricity.) The 205 terawatt-hours represent a 6% increase in total power consumption since 2010, but global data center compute instances rose by 550% over that same period.
To drive that point home: Considerably more compute is being deployed, yet the amount of power consumed is holding steady.
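A quick back-of-envelope calculation makes the scale of the efficiency gain concrete. Using the figures cited above (energy up roughly 6%, compute instances up roughly 550% between 2010 and 2018), the energy consumed per compute instance fell by more than 80%. This is illustrative arithmetic derived from the article's numbers, not a figure from the study itself:

```python
# Back-of-envelope check using the figures cited above
# (2010 -> 2018; these are illustrative ratios, not measured data).
energy_growth = 1.06        # total data-center energy use rose ~6%
instance_growth = 1 + 5.50  # compute instances rose ~550%, i.e. 6.5x

energy_per_instance = energy_growth / instance_growth
drop = 1 - energy_per_instance

print(f"Energy per compute instance: {energy_per_instance:.2f}x the 2010 level")
print(f"That is roughly a {drop:.0%} drop per instance")
```

In other words, each unit of compute in 2018 cost only about a sixth of the energy it did in 2010.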
The paper cites a number of reasons for this. For starters, hardware power efficiency is vastly improved. The move to server virtualization has meant a six-fold increase in compute instances with only a 25% increase in server energy use. And a shift to faster and more energy-efficient port technologies has brought about a 10-fold increase in data center IP traffic with only a modest increase in the energy use of network devices.
Even more interesting, the report claims the rise of and migration to hyperscalers has helped curtail power consumption.
Hyperscale data centers and cloud data centers are generally more energy efficient than company-owned data centers because there is greater incentive for energy efficiency. The less power Amazon, Microsoft, Google, etc., have to buy, the more their bottom line grows. And hyperscalers are big on cheap, renewable energy, such as hydro and wind.
So if a company trades its own old, inefficient data center for AWS or Google Cloud, it reduces the overall power draw of data centers as a whole.
"Total power consumption held steady as computing output has risen because of improvement efficiency of both IT and infrastructure equipment, and a shift from corporate data centers to more efficient cloud data centers (especially hyper scale)," said Jonathan Koomey, a Stanford professor and one of the authors of the research, in an email to me. He has spent years researching data center power and is an authority on the subject.
"As always, the IT equipment progresses most quickly. In this article, we show that the peak output efficiency of computing doubled every 2.6 years after 2000. This doesn’t include the reduced idle power factored into the changes for servers we document," he added.
Koomey notes that there is additional room for efficiency improvements to cover the next doubling of computing output over the next few years but was reluctant to make projections out too far. "We avoid projecting the future of IT because it changes so fast, and we are skeptical of those who think they can project IT electricity use 10-15 years hence," he said.