Network World - The oil and gas industry was once the province of the world's fastest supercomputers from makers such as Cray and IBM. But recently, industry heavyweights such as Amerada Hess, British Petroleum, Conoco and Shell have discovered that large Linux clusters are capable of tackling the massive computational tasks involved in finding oil.
"Linux clusters are moving in and becoming very competitive in areas where large Unix clusters were used in the past," says Bill Claybrook, an analyst with Aberdeen Group. That's because Linux clusters cost between five to 20 times less than proprietary high-performance computing systems that require small fortunes to acquire and maintain.
"You can probably run 80% of the applications used in high-performance computing just as fast on a Linux cluster and at a much cheaper price," Claybrook says.
Hess migrated from IBM's supercomputer Unix cluster, or SP system, to clusters of inexpensive Linux PCs over the last five years, as the company became more familiar with Linux and saw the financial benefits of making the switch.
The Houston petroleum company uses a cluster of 320 workstations running Red Hat Linux to process 3-D models of underground geological structures used for locating oil reservoirs. The cluster works by breaking up large amounts of mathematical data and distributing pieces of the problem to the nodes, a mix of Dell, HP and IBM machines, each with dual Pentium IV processors and about a gigabyte of memory.
Each node works on its own part of the model, then returns data to a "master" Linux cluster node attached to a tape drive. The drive then writes the results to tapes, and Hess geological experts analyze the data to locate oil reservoirs.
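The workflow described above is a classic scatter-gather pattern: a master splits the problem, workers compute on their pieces, and the master collects the results. The sketch below illustrates that general technique using Python's standard multiprocessing module; it is not Hess's actual seismic-processing software, and the chunking scheme and per-node computation are placeholder assumptions.

```python
# A minimal scatter-gather sketch, assuming a simple even split of the
# data and a placeholder computation per worker. In the real cluster,
# each node would run numerical modeling on its slice of seismic data.
from multiprocessing import Pool


def process_chunk(chunk):
    # Placeholder for the per-node work (here: a sum of squares).
    return sum(x * x for x in chunk)


def split(data, n_nodes):
    # Break the full dataset into roughly equal pieces, one per node.
    size = (len(data) + n_nodes - 1) // n_nodes
    return [data[i:i + size] for i in range(0, len(data), size)]


if __name__ == "__main__":
    data = list(range(1_000))
    chunks = split(data, n_nodes=4)        # "scatter" pieces to the nodes
    with Pool(processes=4) as pool:
        partials = pool.map(process_chunk, chunks)
    result = sum(partials)                 # "gather" at the master node
    print(result)
```

In a real cluster the scatter and gather steps cross the network (via MPI or a job scheduler) rather than running as local processes, but the division of labor is the same: independent pieces of work, combined only at the master.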
Jeff Davis, a systems programmer who manages the Linux cluster, says the change has let Hess acquire more computing power at a fraction of the cost of the IBM SP. The SP cost about $1.5 million per year to maintain and run, whereas the company purchased its first 100-node Linux cluster for around $150,000. Yearly maintenance costs for the cluster run about a quarter the cost of the equipment, Davis adds, noting that clusters now can be added for about $100,000.
"The SP was a first-class machine, but you paid for every bit of it," Davis says. "For the most part, these are very reliable machines in the Linux cluster."
The SP provided superior uptime -- it had been up for two years straight before it was taken down -- but Davis says the trade-off was acceptable.
"Most of the problems we do have are not due to Linux," he says, referring to reliability issues with PC hardware components in the cluster. That was expected, he adds. "What we're talking about here is going from top-of-the-line server platform to basically desktop machines," he says.