Neal Weinberg
Contributing writer, Foundry

A different take on utility computing

Opinion
Apr 26, 2004 | 3 mins
Data Center

It’s refreshing when an analyst, or in this case a group of analysts, decides not to follow the herd, as was the case earlier this month at IDC’s annual IT industry briefing.

As you undoubtedly know by now, utility computing is the buzzword of the day. According to the hype, the future is all about autonomic, grid-based, on-demand, pay-as-you-go, adaptive computing environments.

But the analysts at IDC aren’t jumping into that virtual pool just yet. Using market data and survey results as the foundation for their analysis, Crawford Del Prete, Vernon Turner and Mark Melenovsky have come up with an alternative vision for how this whole utility computing thing might shake out.

First, they predict that spending on new servers will grow slowly through 2008, at a compound annual growth rate of only 3% over five years. Even so, technological advances are bringing customers more bang for their buck. That means if you’re running a data center, processing power isn’t your big problem; you’ll have plenty of that.

Your big problem is managing all those boxes. IDC predicts that spending on new servers will be about $55 billion this year, while spending to manage those servers will be about $95 billion. By 2008, new server spending will have crept past $60 billion, and management costs will have soared to around $140 billion.
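
A quick back-of-the-envelope check (mine, not IDC’s): four to five years of 3% compound growth on roughly $55 billion lands in the low-$60 billion range ($55 billion x 1.03 to the fourth power is about $62 billion), which squares with the “crept past $60 billion” projection.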

Approaching it from a different angle, IDC asked more than 400 customers worldwide to name the most valuable potential benefit of utility computing. The No. 1 answer was lowering IT operating costs.

Which brings us to hardware monitoring and management. IDC breaks down utility computing into three phases: server monitoring and management; automated provisioning; and virtualization and service-level automation.

Over the next few years, customers will address their main concern – high data center management costs – by adopting platform monitoring and management tools. Then they will move into server- and application-level provisioning.

But there won’t be much traction for infrastructure virtualization or service-level automation, at least not in the next several years. IDC’s alternative vision for utility computing is that many companies might never get to virtualization or service-level automation.

In IDC’s “good enough computing” scenario, companies might use a variety of other methods to make sure they have enough data center resources at the ready. Those technologies include server consolidation, clustering, partitioning and 64-bit computing on a platform that takes advantage of things like high-speed interconnects, new chip architectures and blade servers.