Network World - Stu Jackson needs CPU cycles - lots of CPU cycles. As IT architect for Incyte Genomics, Jackson designs systems that use computing resources the way a blast furnace uses iron ore. The Palo Alto firm's genomic applications burn up every available CPU resource.
Jackson doesn't need supercomputers, however. He builds his applications for pharmaceutical and biotech firms on computing grids. "For businesses that consume CPU cycles as a raw material, grids make sense in almost every case," he says.
Organizations have spent large sums of money building their computing infrastructures, which primarily consist of computers that spend a lot of time doing nothing. Harnessing those unused CPU cycles to power compute-intensive applications is the driving idea behind grid computing.
A grid computing system is a distributed parallel collection of computers that enables the sharing, selection and aggregation of resources. This sharing is based on the resources' availability, capability, performance, cost and ability to meet quality-of-service requirements.
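The selection step described above can be sketched as a simple scoring function. This is an illustrative toy, not the algorithm of any real grid scheduler; the node attributes, weights, and QoS threshold are all assumptions made for the example:

```python
# Toy grid resource selection: filter candidate nodes on availability and
# a quality-of-service requirement (latency), then rank the survivors by
# capability per unit cost. All attribute names and values are invented.

def select_node(nodes, min_cpus, max_latency_ms):
    """Pick the node offering the most CPUs per dollar that meets QoS."""
    candidates = [
        n for n in nodes
        if n["available"]
        and n["cpus"] >= min_cpus
        and n["latency_ms"] <= max_latency_ms
    ]
    if not candidates:
        return None
    # Capability vs. cost trade-off: more CPU capacity per unit cost wins.
    return max(candidates, key=lambda n: n["cpus"] / n["cost_per_hour"])

nodes = [
    {"name": "a", "available": True,  "cpus": 8,  "cost_per_hour": 2.0, "latency_ms": 40},
    {"name": "b", "available": True,  "cpus": 32, "cost_per_hour": 5.0, "latency_ms": 15},
    {"name": "c", "available": False, "cpus": 64, "cost_per_hour": 6.0, "latency_ms": 10},
]
best = select_node(nodes, min_cpus=8, max_latency_ms=50)
print(best["name"])  # "b": 32/5.0 = 6.4 CPUs per dollar beats node a's 4.0
```

A production scheduler would weigh many more factors (queue depth, data locality, reservations), but the shape of the decision is the same: filter on hard requirements, then optimize over what remains.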
Grids come in various sizes: cluster grids that pull workgroup computers into a single system, grids that link multiple clusters, enterprise grids that tie together computers across a single organization, and global grids that combine computers from multiple organizations into massively parallel high-performance computing engines.
There also are several types of grids, from the traditional grids that focus on aggregating CPU horsepower, to data grids that move terabytes of data between sites for analysis, to access grids that provide high-performance video conferencing and application sharing between multiple sites. Each grid, no matter the size or type, is tied together with job scheduling and management software.
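The job-scheduling role that ties a grid together can be illustrated with a toy dispatcher that farms independent tasks out to idle nodes. This is a simplified sketch of the core loop, not the behavior of any vendor's product; the job and node names are invented:

```python
# Toy grid job dispatcher: hand each queued task to the next idle node,
# returning the node back to the idle pool when its task "completes".
# Real scheduling software adds priorities, data staging, and fault
# tolerance; this sketch shows only the basic assignment cycle.
from collections import deque

def schedule(tasks, nodes):
    """Assign each task to an idle node; return (task, node) pairs in order."""
    queue = deque(tasks)
    idle = deque(nodes)
    assignments = []
    while queue:
        node = idle.popleft()      # take the next idle node (here: instantly free)
        task = queue.popleft()
        assignments.append((task, node))
        idle.append(node)          # task finishes; node rejoins the idle pool
    return assignments

jobs = ["align-genome-1", "align-genome-2", "align-genome-3"]
print(schedule(jobs, nodes=["node-a", "node-b"]))
# [('align-genome-1', 'node-a'), ('align-genome-2', 'node-b'), ('align-genome-3', 'node-a')]
```

With only two nodes and three jobs, the third job waits for the first node to free up, which is exactly the harvesting of otherwise idle cycles the article describes.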
Avaki, DataSynapse, Entropia and Platform Computing are four companies specializing in grid management and scheduling software. Entropia specializes in linking PCs into parallel-computing grids. The other three focus on high-performance servers and midrange computers. All are building products based on the Open Grid Services Architecture (OGSA), a standard developed by the Global Grid Forum, a trade group seeking to create a common basis for grid computing. In addition to the commercial offerings, the Globus Project has developed an open source grid framework based on OGSA standards.
Hewlett-Packard, IBM and Sun each have developed grid initiatives based on their own hardware. While each has unique elements, all claim allegiance to the OGSA standard. Dan Powers, vice president of grid computing strategy and business development at IBM, says rallying around a standard is a must for the growing grid market. "We didn't need eight different ways to build networks, so we ended up with TCP/IP. We don't need eight different ways to build grids," Powers says.
Grid computing's first moves out of the academic and research arenas have been into compute-intensive applications. Bioinformatics, oil and gas exploration, automotive and aerospace engineering, and financial services industries were among the early corporate adopters.