
Digging into utility computing

May 18, 2004 | 3 mins
Data Center

* Utility computing is like the old time-shared operating system environment

What is the difference between the computing model we use today and the evolving concept of utility computing? In a very real sense, the differences exist down to the most basic services IT delivers to its users: we are talking here about the way services are delivered, the way IT services and assets are provisioned, and the way IT services are paid for.

It’s called “utility” computing because, as is the case with most municipal utilities, in theory you only use the resources you need, and you only get charged for the resources you use.

The utility metaphor works pretty well. For example, no one expects individual consumers to buy their own power plants or water pumping stations; the cost of the utility infrastructure is shared across the user base.

In like fashion, the IT assets in a utility computing environment are shared among users.  IBM, HP and Veritas each have their own definitions of what the utility model should be, but the common concept they all share is the idea that resources are brought on line when users (or processes) require them, and these assets are returned to a common resource pool when they are no longer needed.  Some vendors supplying pieces of a utility computing model already allow for “chargeback,” the capacity to bill users for the resources they have used.
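The shared-pool-plus-chargeback lifecycle described above can be sketched in a few lines of Python. Everything here is hypothetical for illustration; the class and method names are not any vendor's actual API, and a real metering system would be far more elaborate:

```python
import time

class ResourcePool:
    """Toy model of a shared utility pool with per-user chargeback metering."""

    def __init__(self, capacity):
        self.free = capacity   # units currently available in the common pool
        self.usage = {}        # user -> total unit-seconds consumed so far
        self.leases = {}       # lease id -> (user, units, start time)
        self.next_id = 0

    def allocate(self, user, units):
        """Bring resources online for a user when they are required."""
        if units > self.free:
            raise RuntimeError("pool exhausted")
        self.free -= units
        self.next_id += 1
        self.leases[self.next_id] = (user, units, time.time())
        return self.next_id

    def release(self, lease_id):
        """Return the leased assets to the common pool and record usage."""
        user, units, start = self.leases.pop(lease_id)
        self.free += units
        elapsed = time.time() - start
        self.usage[user] = self.usage.get(user, 0.0) + units * elapsed

    def chargeback(self, user, rate_per_unit_second):
        """Bill the user only for the resources they actually used."""
        return self.usage.get(user, 0.0) * rate_per_unit_second
```

The key property the vendors agree on is visible in `release`: the units go straight back into `self.free` for the next requester, while the meter (`self.usage`) keeps what is needed to bill the departing user.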

Conceptually this sounds a lot like the old time-shared operating systems from the days of the super-minicomputers.  Those of you with a sense of history (and for whom the memory of honking big machines from Digital Equipment, Data General and Prime strikes a nostalgic note) will recall that they featured many dumb terminals hooked up to large machines, allowing multiple users to share a single CPU.  How much of the CPU resource you got (usually measured in milliseconds) was determined by your “eligibility,” often set by an administrator.
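The eligibility idea amounts to dividing each scheduling quantum among users in proportion to administrator-assigned weights. A deliberately simplified sketch (real time-sharing schedulers were far richer than this; the function and user names are made up for illustration):

```python
def time_slices(quantum_ms, eligibility):
    """Split one CPU quantum (in milliseconds) among users in proportion
    to the 'eligibility' weight an administrator assigned each of them."""
    total = sum(eligibility.values())
    return {user: quantum_ms * weight / total
            for user, weight in eligibility.items()}

# Three terminal users share a 100 ms quantum according to eligibility.
print(time_slices(100, {"ann": 2, "bob": 1, "carl": 1}))
# → {'ann': 50.0, 'bob': 25.0, 'carl': 25.0}
```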

Utility computing expands this time-sharing model to a much larger scale, of course, but the basic concept – multiple users sharing a common set of resources while managing contention issues – is the same. It is fundamental to the concept of utility computing that users – both human and processes – only have access to the resources they need for a particular job, and that once they no longer need those resources, the resources are released back into the general pool.

The idea of an asset “pool” is crucial.  This column has noted many times that, when it comes to storage, working with a single virtualized resource pool is clearly the most efficient way to manage things.  This is because users are able to treat multiple physical storage devices as a single storage entity, or to sub-divide the storage pool as necessary without regard to the limitations of individual physical devices.  The result is that the available disk space is maximized.  Naturally, having the storage pool on a network means that multiple users have access to all these resources.
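What “without regard to the limitations of individual physical devices” means in practice is that a logical volume can span device boundaries. A minimal sketch of the idea, with invented names and no resemblance to any particular vendor's volume manager:

```python
class VirtualStoragePool:
    """Present several physical disks as one logical pool, and carve out
    volumes whose extents may span physical device boundaries."""

    def __init__(self, device_sizes_gb):
        self.extents = list(device_sizes_gb)  # free GB left on each device
        self.volumes = {}                     # name -> [(device index, GB), ...]

    def total_free(self):
        # Users see one number: the pool's combined free space.
        return sum(self.extents)

    def create_volume(self, name, size_gb):
        if size_gb > self.total_free():
            raise RuntimeError("not enough pooled space")
        pieces, need = [], size_gb
        for i, free in enumerate(self.extents):
            take = min(free, need)
            if take:
                pieces.append((i, take))      # this extent lives on device i
                self.extents[i] -= take
                need -= take
            if not need:
                break
        self.volumes[name] = pieces
        return pieces

# Two 100 GB disks pooled together can back a single 150 GB volume,
# something neither physical device could do alone.
pool = VirtualStoragePool([100, 100])
print(pool.create_volume("vol0", 150))
# → [(0, 100), (1, 50)]
```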

Today, most vendors offer some capability to virtualize storage, which is the crucial first step towards utility computing.  But what else must the utility infrastructure include?   More on that next time.