The complexity of utility computing

* The demands of utility computing on the network


The greater the complexity of your computing ecosystem, the more likely you'll derive benefits from a utility model. If enterprise storage for you really means "enterprise," it is something you should consider.

The emerging utility computing models offer many benefits under a variety of circumstances, but utility computing really shines when demands fluctuate. One hundred users kicking off different processes that run on multiple servers and access networked storage in a dynamically changing environment are much more likely to get value from utility computing than 1,000 users in a comparatively static one.

A utility model assumes that the infrastructure provisions and manages services and assets on an as-needed basis. "On-demand" is the preferred marketing parlance, of course, but I am starting to think that "as-needed" may be the more suitable term in a managed environment, because there "need" is determined by policy, not merely demand.
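The distinction between demand and policy-gated "need" can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation; the function name, thresholds and figures below are all hypothetical.

```python
# A minimal sketch of the "as-needed" distinction: provisioning is gated
# by policy, not by raw demand. All names and numbers are hypothetical.

def provision(requested_gb: int, used_gb: int, policy_cap_gb: int) -> int:
    """Grant only what policy allows, regardless of what was demanded."""
    headroom = max(policy_cap_gb - used_gb, 0)
    return min(requested_gb, headroom)

# A tenant demands 500 GB, but policy caps it at 400 GB total and
# 350 GB are already in use, so only 50 GB are actually provisioned.
print(provision(500, 350, 400))  # 50
```

The point is simply that the managed environment answers "how much should you get?" rather than "how much did you ask for?"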

In complex environments, this is likely to be no trivial task.

By way of example, as the amount of networked storage increases, the networks themselves play an increasingly important role in storage I/O performance. Because most networks are relatively non-deterministic, storage I/O that was once easily characterized on a dedicated storage bus becomes much harder to predict. Other traffic on the network affects your I/O, and your I/O affects the network.
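One way to see why shared networks make I/O unpredictable is a simple queueing approximation: as other traffic raises link utilization, latency grows non-linearly. This is an illustrative M/M/1-style model with hypothetical numbers, not a characterization of any real SAN.

```python
# Illustrative only: latency on a shared link approximated as service
# time inflated by queueing delay (M/M/1-style). Numbers are hypothetical.

def io_latency_ms(base_ms: float, utilization: float) -> float:
    """Approximate I/O latency given link utilization in [0, 1)."""
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be in [0, 1)")
    return base_ms / (1 - utilization)

# A 2 ms I/O on an idle link stays 2 ms; at 90% utilization it
# balloons to 20 ms, even though nothing about the storage changed.
for u in (0.0, 0.5, 0.9):
    print(f"utilization {u:.0%}: {io_latency_ms(2.0, u):.1f} ms")
```

On a dedicated bus, utilization is under your control; on a shared network it depends on everyone else's traffic, which is exactly what makes the prediction hard.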

This becomes an even more demanding situation, of course, as IT resources are reconfigured to meet changing needs. Even if the new pathways are predetermined, the task of bringing them online and offline is a daunting one.

Daunting, but not impossible.

Two technologies would seem to be fundamental to the on-the-fly provisioning that is required here: virtualization and automation, both discussed often in this column. But automation and virtualization of what?

It is a good bet that to get the most out of a complex system you will have to manage more than just the storage.

One simple but useful way to look at IT is to divide it into three broad categories of assets: servers, networking and storage. A completely managed computing utility should be able to virtualize and automatically manage across all three categories, getting the most out of all the components as they operate individually and also as they interact with other systems.

Those of you on the East Coast of the U.S. may remember with varying degrees of fondness Ballantine Beer of years past, with its logo of three interlocking rings symbolizing "purity, body and flavor". Substitute "servers, networks and storage" as names for the three rings, and the place where they all overlap (the intersection, for those of you who like set theory) represents the complex interworking of the three components that make up the IT system. This is what must be managed in order to have a fully functioning utility.
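The three-ring picture translates directly into set terms. A toy sketch, with entirely hypothetical concern names, makes the point: each discipline's tools cover their own ring, and the hardest problems live where all three overlap.

```python
# Toy illustration of the three-ring analogy. The concern names are
# hypothetical; the point is that a shared concern sits in the overlap.
servers = {"provisioning", "failover", "io_paths"}
networks = {"routing", "congestion", "io_paths"}
storage = {"replication", "capacity", "io_paths"}

# The intersection of all three rings: concerns no single-category
# management tool fully owns.
print(servers & networks & storage)  # {'io_paths'}
```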

History would seem to support this idea: Many managers have found that the thorniest problems to deal with have been the ones whose root cause lies at an intersection between these categories, those places where, for example, storage and networking meet but where tools for root-cause analysis do not yet extend.

Certainly, managing individual components within the system is useful, but give this a bit of thought and you will probably agree that managing a single component or a subset of components will always result in sub-optimized operations when you look at the system as a whole. Managing across two subsets as they interoperate with one another is clearly better than managing only one, but coming to grips with the entire complexity of an IT system is surely the best approach.

Alas, it is a given that no single vendor can do it all, and only a few can adequately manage more than one of these three categories. That being the case, the best we can hope for right now is to deal with vendors that can provide us with sub-categories of the utility model - a storage utility, for example, or virtualized server operations that provide an agile approach to using processing power.

This won't address all of the complexity issues, of course, and we will still need to ensure the integrity of those points within the system that we currently have no ability to manage. That is why, when it comes to protecting data, assets and processes, we still have to make sure they operate in as secure an environment as we can provide.

In any computing environment, utility or not, data security will always have to be part of the solution.
