by Audrey Rassmussen

Share the power

May 14, 2003


There are a few common themes that are emanating from management vendors right now: automation, virtualization and utility computing. At a recent IBM System Group/Software Group conference, for instance, the major themes were virtualization, automation and business integration.

Utility or grid computing is just one way to virtualize the infrastructure. Last week, at Veritas’ annual Vision Conference, utility computing was a major theme, coupled with automation.  I’ve written about automation in the past, but what are virtualization and utility computing?

Virtualization has been evolving in storage technologies most recently. While not a new concept, virtualization is now being looked at as a mechanism to make more efficient use of scattered and distributed IT resources.

The concept is to manage pools of resource types, such as storage, servers and processing capability, rather than managing each individual piece of technology.

The advantage of managing a pool of resources is that you don't always care which piece of hardware is running your job; you just want the job completed within an optimal amount of time (which can vary depending on the criticality of the job).

Using resource pools opens up possibilities to maximize the efficient use of resources. For example, you may have small amounts of disk capacity available on several servers. Viewed as individual devices, it may look like there isn't enough room to store a large file. When the disk space is virtualized, however, you have ample room to store the large file with room to spare. The only difference is that your data may be spread across two or more servers, rather than residing on one.
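The pooling idea can be sketched in a few lines of Python. The server names and capacity figures below are purely hypothetical, and a real storage virtualization layer does far more, but the placement logic illustrates why a file that fits on no single disk can still fit in the pool:

```python
# Hypothetical free-space figures (in GB) for three file servers.
free_space = {"serverA": 40, "serverB": 35, "serverC": 50}

def place_file(file_size, pools):
    """Greedily split a file across pooled free space.

    Returns a list of (server, chunk_size) allocations, or None if
    the pool as a whole cannot hold the file.
    """
    if file_size > sum(pools.values()):
        return None
    allocations = []
    remaining = file_size
    # Fill the emptiest servers first to minimize the number of pieces.
    for server, free in sorted(pools.items(), key=lambda kv: -kv[1]):
        if remaining == 0:
            break
        chunk = min(free, remaining)
        allocations.append((server, chunk))
        remaining -= chunk
    return allocations

# A 90GB file fits on no single server, but fits in the 125GB pool.
print(place_file(90, free_space))  # [('serverC', 50), ('serverA', 40)]
```

No single server has 90GB free, yet the pooled view stores the file by spanning two of them, which is exactly the virtualization payoff described above.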

Grid or utility computing is an emerging technology that lets you virtualize your resources more broadly. One application is the compute grid, where an application that requires intensive processing can run in parallel. Scientific research is a great example: human genome research, for instance, requires extensive compute power. Grid computing allows researchers to link together many existing computing resources to tackle the problem, rather than having to buy huge supercomputers.

From an enterprise perspective, grid computing can be a means to virtualize and use the excess capacity strewn across your infrastructure. Think of all the PCs in your infrastructure with excess and idle CPU and disk resources. Grid computing parcels out application processing to the "compute grid," which is your inventory of PCs with excess capacity. When each PC finishes its part of the job, it sends the work back, and the results are pulled together. The user doesn't know that the application was actually processed on 50 different computers; they just know that the job was completed and returned in a reasonable time.
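The scatter/gather pattern described above can be sketched in Python. This is a minimal stand-in, not grid software: a local thread pool plays the role of the idle PCs, and summing a list of numbers plays the role of the application, both purely illustrative assumptions:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical work unit: one PC sums its slice of a large dataset.
def process_chunk(chunk):
    return sum(chunk)

def run_on_grid(data, n_workers=4):
    """Split a job into chunks, scatter them to workers, gather results.

    A local thread pool stands in for the grid of idle PCs; a real
    grid would ship each chunk to a different machine.
    """
    size = max(1, len(data) // n_workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        partials = pool.map(process_chunk, chunks)
    # Pull the partial results back together, as a grid controller would.
    return sum(partials)

print(run_on_grid(list(range(1000))))  # same answer as sum(range(1000)) == 499500
```

The caller sees one answer come back; the splitting, distribution and reassembly are invisible, which is the point of the "user doesn't know" observation above.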

Grid computing is still in its early stages, but it is developing. The Globus Project is a cooperative effort to develop grid computing standards, as well as technologies that can enable compute grids. The Open Grid Services Architecture (OGSA) is one of the grid standards that have been developed. Grid computing is challenging because it deals with extremely heterogeneous environments, with pieces of an application running simultaneously on Linux, Windows and Unix variants.

Although grid computing is still developing, it is a technology you should watch. Virtualization and grid computing could be efficient, cost-saving technologies when paired with automation. I guarantee you haven't heard the last of these technologies.