Utility computing comes full circle

Opinion
May 13, 2003

Utility computing is the latest, greatest and most economical enterprise computing model for the next decade, according to CA, HP, IBM, Sun and Veritas.  Now if we could only get them to agree on a common definition.

At the core of utility computing is the idea that IT resources should be used – and billed for – in much the same way that consumers use their power or water utilities, leaving you to pay only for what you use plus, presumably, a small surcharge to maintain the infrastructure.  In the case of the water utility, this means you pay for each gallon or liter of water you use, but the costs associated with building and maintaining the water infrastructure – pipes, reservoirs and so forth – are spread across the entire user base. 

This should be of interest to enterprise IT sites because they never use all their resources all the time.  In fact, when it comes to storage, they usually don’t even take advantage of a significant subset of what they own.

Broadly speaking, an enterprise IT shop could implement a utility model for storage in one of two ways.

The first model brings us back to an idea that achieved much notoriety but comparatively little success during the last several years: the pay-as-you-go managed service provider (MSP). 

As you will recall, when it came to storage the most prevalent method for MSPs was either for the IT staff to offload storage services such as backup and recovery to a remote site, or to maintain storage at the local site but outsource the management of those services.  Some smaller sites have managed to make a go of this by offering clients a competitively advantageous third-party service that is optimized to fulfill certain storage needs.  Most of the larger players, however, failed to carve out a niche, either because their infrastructure costs were too high or, in at least as many cases, simply because IT organizations were largely unwilling to hand over management of such key data to an outsider that was not a “trusted resource.”

A second model is also available.  In this case we see the utility model managed internally, each company owning its own infrastructure and billing its internal clients according to services used.  This is the “charge-back” mechanism beginning to appear in an increasing number of storage resource management software packages. 

Such an approach can be implemented at a number of levels.  The simplest has IT owning all the resources and charging according to services used.  Again, a general “infrastructure surcharge” is likely to be baked into the pricing scheme.  A more sophisticated methodology maintains the same billing model, but with one major change: vendors put assets in their customers’ shops, but bill only for the time those resources are actually used.  At the heart of this is a vendor’s ability to monitor its assets in the field.  Then, when IT runs out of headroom, the management application alerts both the vendor and IT management that it is time to “ante up” and pay for more disk space, processing power, or licenses.
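
To make that chargeback arithmetic concrete, here is a minimal sketch in Python of how an internal billing run might roll metered usage up into a monthly bill; the department names, per-gigabyte rate and infrastructure surcharge are hypothetical and not drawn from any particular storage resource management product.

# Minimal chargeback sketch: bill internal departments for metered storage,
# plus a shared "infrastructure surcharge" (all figures are assumptions).
RATE_PER_GB_MONTH = 0.50   # hypothetical internal rate, dollars per GB-month
INFRA_SURCHARGE = 0.15     # hypothetical 15% uplift for shared infrastructure

usage_gb = {"finance": 1200, "engineering": 4800, "marketing": 350}

def monthly_bill(gb_used: float) -> float:
    """Return one department's chargeback: metered use plus the surcharge."""
    base = gb_used * RATE_PER_GB_MONTH
    return round(base * (1 + INFRA_SURCHARGE), 2)

for dept, gb in usage_gb.items():
    print(f"{dept}: ${monthly_bill(gb):,.2f} for {gb} GB")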

For example, an IT director who buys a storage-area network switch with 64 ports for a growing SAN, but uses only the first 20 ports, would pay only for the ports in use.  As the SAN grows, the vendor might sell the site a key that frees up another block of 10 ports.  And so on. 
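
A rough sketch of that capacity-on-demand idea, again in Python and purely illustrative: the switch ships with all 64 ports physically present, and the vendor’s key simply raises the number of ports the customer is licensed, and therefore billed, to use.

# Illustrative capacity-on-demand model: all ports ship in the box, but the
# customer is billed only on the licensed count, unlocked in blocks of 10.
class SanSwitch:
    def __init__(self, physical_ports: int = 64, licensed_ports: int = 20):
        self.physical_ports = physical_ports
        self.licensed_ports = licensed_ports

    def apply_license_key(self, extra_ports: int = 10) -> None:
        """Unlock another block of ports, capped at the physical limit."""
        self.licensed_ports = min(self.physical_ports,
                                  self.licensed_ports + extra_ports)

    def monthly_charge(self, price_per_port: float = 40.0) -> float:
        # Billing follows the licensed ports, not the 64 in the chassis.
        return self.licensed_ports * price_per_port

switch = SanSwitch()
print(switch.monthly_charge())   # 20 licensed ports billed
switch.apply_license_key()       # the SAN grows; vendor sells a 10-port key
print(switch.monthly_charge())   # 30 licensed ports billed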

Think the concepts here are new?  Think again.

Back in the 1970s, multiple companies would share the cost of a mainframe owned by a third party that provided them with time-sharing services (payroll checks and many other applications are still often run this way). This was similar to what many modern ASPs tried to provide, but with one major difference: whereas the old time-sharing environments ran batch jobs, the new ones need to provide services in real time.

In the 1980s, the advent of the supermini computer shifted the paradigm.  Superminis from Digital Equipment, Data General and Prime brought time-sharing into the corporate IT shop, allowing many users (with predetermined levels of eligibility) to contend for a slice of time on an in-house processor.  Of course, the desktop PC put an end to all that by providing each user with nonshared processing and storage and more easily determined performance levels, with processing and data distributed across the enterprise (modern IT managers may feel free to replace the term “distributed” with “scattered”).

Now the vendors are getting ready to turn the technology crank again.  The concept of utility computing adds a further level of refinement to a cost-efficiency model that is constantly changing, and often improving.  What should give us particular hope here is the convergence of a number of new computing paradigms all moving toward maturity at the same time.  Not only do we have utility models, but peer-to-peer and grid applications are arising as well.

The planets seem to be coming into alignment.  If that is the case, the promise may be not only one of an infinitely extensible utility computing model, but also of resource management so precise that we will only pay for those parts of the overall computing environment that actually provide us with value.  What a concept.