Researchers power up server clusters

Network World
June 30, 2003 12:08 AM ET

Network World - Software being developed by a computer science professor at Duke University could help spur so-called utility computing, technology that promises self-managing systems that grow and shrink in response to user demand.

For about a year and a half, Jeff Chase, associate professor in the Department of Computer Science at the university in Durham, N.C., has worked on software that lets users share clusters of computers by creating multiple virtual systems out of a single physical pool of servers.

Called Cluster-on-Demand, the software separates the entire software environment, including the operating system and applications, from servers. To put it simply, the servers boot via the network and hook into a database that tells them what operating system to run, what software to load, what policies to adhere to and other details.
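The article describes this mechanism only at a high level, but the pattern - a freshly network-booted server asking a central database what it should become - can be sketched roughly as follows. This is a hypothetical illustration in Python; the table name, columns, and identifiers are invented for the sketch and are not the Duke project's actual schema.

    import sqlite3

    # Hypothetical sketch of the pattern described above: a server that has
    # just booted over the network identifies itself (here, by MAC address)
    # and asks a central database which environment it should load.
    def fetch_node_config(db_path, mac_address):
        """Look up the OS image, software set, and policies assigned to a node."""
        conn = sqlite3.connect(db_path)
        try:
            row = conn.execute(
                "SELECT os_image, software_set, policy_group "
                "FROM node_assignments WHERE mac = ?",
                (mac_address,),
            ).fetchone()
        finally:
            conn.close()
        if row is None:
            raise LookupError("no assignment for node " + mac_address)
        os_image, software_set, policy_group = row
        return {
            "os_image": os_image,          # operating system to boot
            "software_set": software_set,  # applications to load on top
            "policy_group": policy_group,  # policies the node must adhere to
        }

    # A boot-time agent might call, for example:
    #   config = fetch_node_config("/srv/cod.db", "00:0c:29:ab:cd:ef")
    # and then apply the returned environment before handing the machine
    # over to its assigned virtual cluster.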

As a result, the computing resources can be used for whatever applications demand them, Chase says.

"Cluster-on-Demand treats the operating system as a replaceable component that can be configured based on the needs of the user," Chase says. "We treat hardware as generic. . . . We want to allow companies to be able to view their clusters as a multipurpose, modular dynamic resource, rather than as a brittle computing resource bound to specific software environments."

Chase calls Cluster-on-Demand, which is being used to share resources in a 350-computer cluster at Duke, a resource manager for mixed-use clusters. Cluster-on-Demand is the framework that lets those linked clusters be provisioned on the fly according to user demands, he says. The approach also could be applied within corporate data centers, but the question is whether business needs demand it yet.
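The project's actual scheduling logic isn't published here, but the grow-and-shrink behavior Chase describes amounts to an allocation loop over a pool of generic nodes. A toy Python sketch, with invented names and a deliberately naive policy:

    # Toy sketch of on-the-fly provisioning: generic servers are moved
    # between virtual clusters as demand changes. This is an invented
    # illustration, not Duke's algorithm.
    def rebalance(free_pool, vclusters, demand):
        """Grow or shrink each virtual cluster toward its demanded node count."""
        # Shrink first, returning surplus nodes to the shared free pool
        # (in a real system each node would be wiped and re-imaged on reuse).
        for name, nodes in vclusters.items():
            while len(nodes) > demand.get(name, 0):
                free_pool.append(nodes.pop())
        # Then grow under-provisioned clusters while free nodes remain.
        for name, nodes in vclusters.items():
            while len(nodes) < demand.get(name, 0) and free_pool:
                nodes.append(free_pool.pop())

    pool = ["node%d" % i for i in range(8)]
    clusters = {"batch": [], "web": []}
    rebalance(pool, clusters, {"batch": 5, "web": 2})
    print(clusters, "free:", pool)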

"The use of this technology could be effective in large, application-rich or computing-intensive enterprise data centers. It might just be too much for the small to midsize environment," says James Barry, CIO at OneUnited Bank in Boston.

Clusters have long been popular in research labs doing heavy-duty number crunching, but they're becoming more common in corporate data centers, where customers are linking multiple low-cost Intel-based servers to get the computing power previously available only with expensive high-end machines.

As a result, industry experts say the work being done at Duke could have significant impact on how users manage corporate data centers of the future.

"The idea of data centers being light on their feet with respect to allocating and reallocating resources is very much of interest to IBM," says Bill Tetzlaff, an engineer at IBM who is familiar with Cluster-on-Demand. "It's very basic to the on-demand computing notion and the utility computing notion."

IBM and HP, along with the National Science Foundation, are sponsoring the work at Duke, which Chase notes was inspired by Oceano, an IBM Research project aimed at automating Web server farms.

Even as systems vendors such as IBM with its On-Demand initiatives, HP with its Adaptive Infrastructure and Sun with N1 talk about the promise of utility computing, industry experts say there are many hurdles to overcome before it becomes reality.
