Utility computing: Are we there yet?

Opinion
Dec 06, 2005 | 4 mins
Data Center

* The idea behind a utility storage environment

Utility computing, the overused and continuously redefined term we have heard so much about over the last two years, keeps edging closer to reality. It is tempting to apply one of Zeno’s paradoxes here: the one that tells us that by repeatedly halving the distance to an object we never actually arrive.

This still seems to me to be an appropriate description. We keep getting closer, but we still haven’t really arrived.

The idea behind a utility storage environment is that users pay only for the storage services and assets they use, with unused parts being made available to other organizations. A large IT organization running under a utility model might allocate unused space on the accounting department’s assets to the engineering department, provisioning additional storage to either group as circumstances change. At a service provider, assets would be billed to the provider’s various clients on an as-needed basis, with each client paying only for what it uses. In both situations, the company spreads its capital expenses across the various user constituencies, with each group’s quarterly expenses varying according to the assets and services it actually consumes.
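To put rough numbers on the chargeback idea, here is a minimal sketch in Python of how a quarter’s bills might be apportioned by measured use. The department names, usage figures, and per-gigabyte rate are purely hypothetical, not drawn from any real billing system.

```python
# Illustrative only: apportion a quarter's storage cost by measured use.
# Department names, usage figures, and the per-GB rate are hypothetical.

QUARTERLY_RATE_PER_GB = 0.50  # assumed chargeback rate, in dollars

# Average GB actually consumed by each group during the quarter
usage_gb = {
    "accounting": 1_200,
    "engineering": 4_800,
    "marketing": 600,
}

def quarterly_charges(usage, rate):
    """Return each group's bill: pay only for what you use."""
    return {dept: gb * rate for dept, gb in usage.items()}

if __name__ == "__main__":
    for dept, bill in quarterly_charges(usage_gb, QUARTERLY_RATE_PER_GB).items():
        print(f"{dept:12s} ${bill:,.2f}")
```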

If this pay-as-you-use-it approach sounds a lot like the way you pay for your home’s electrical and other utilities – well, where did you think the term “utility computing” came from? Relatively few of us can put up our own power plants, so we all wind up contributing to the building and upkeep of the broadly owned utility, and then pay for what we use.

Many technologies come into play when we talk about utility implementations. In the most sophisticated utility computing implementations, grid computing and storage grids can be expected to provide an important part of the technical foundation, adding compute power and throughput alongside each increment of storage. The significance of grids in very large environments should be obvious to anyone who has suffered through the expansion of a large database or data warehouse and found that, as the data store got larger, access times increased; it makes little sense to add storage capacity while at the same time lessening our ability to access the data efficiently. A grid environment is a good way to make sure that as the total tonnage of data increases, service levels for accessibility are maintained.
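As a back-of-the-envelope illustration (a hypothetical Python sketch, with made-up node throughput and data sizes), compare growing capacity on a fixed set of nodes against growing capacity and grid nodes together: in the first case the time to sweep the data store climbs with its size, while in the second it stays roughly flat.

```python
# Toy model: time to sweep a data store = data size / aggregate throughput.
# All numbers are hypothetical and chosen only to show the scaling behavior.

NODE_THROUGHPUT_GBPS = 1.0  # assumed per-node throughput, GB/s

def scan_time(data_tb, nodes):
    """Seconds to read the whole store once, spread across the grid's nodes."""
    data_gb = data_tb * 1_000
    return data_gb / (nodes * NODE_THROUGHPUT_GBPS)

for data_tb in (10, 40, 160):
    scale_up = scan_time(data_tb, nodes=4)                  # capacity grows, node count does not
    grid = scan_time(data_tb, nodes=4 * data_tb // 10)      # nodes grow with the data
    print(f"{data_tb:4d} TB  scale-up: {scale_up:8.0f} s   grid: {grid:6.0f} s")
```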

This of course requires an ability to manage more than just storage; we also need to manage the connectivity between the various nodes (or cells) of the grid, the computing engines associated with each node, and so forth. Done incorrectly, this can be a pretty costly operation. Done efficiently, however, it need not be.

The only efficient way to implement grids and utility environments is to automate the management processes that run them. We will be looking at automation next time, and in several editions of this newsletter coming out in the early part of next year.

In the meantime, if this topic interests you, you might also be interested to know that Enterprise Management Associates is conducting its periodic review of IT perceptions of automated system and storage management services. IT readers of this newsletter are welcome to participate in this research. If you are interested in taking a 10-minute Web survey, please click on either of the two links below. Participants will receive a copy of the final report.

The “Automated Storage Management” survey

The “Automated Systems Management” survey

* * *

Also, remember to send in your suggestions for this year’s storage wish list for Santa. In the best of circumstances, the annual wish list can be a useful way to be an activist and let the vendors know what you need. In less positive circumstances, it can be a good way to blow off steam.