How many MIT scientists does it take to build a Linux cluster? Just one, at least in the school's Department of Chemical Engineering.
As part of his post-doctoral research at MIT, Vikram Kuppa, a Ph.D. in chemical engineering, uses several multiprocessor Linux clusters he put together. But he says he spends enough of his hours breaking down the molecular makeup of polymers and putting the chemical structures through virtual stress tests that he has minimal time left for tinkering with Linux kernels, server hardware, network gear and other components that go into the machine clusters he uses.
"I really don't have time to do that," Kuppa says. "I wanted something that was robust and didn't require high maintenance." He says he investigated several free, do-it-yourself clustering packages, such as OSCAR, Rocks and OpenMosix, along with other open source tools.
Compared with both those free packages and the other commercial Linux clustering products Kuppa has tried, Scyld's Beowulf product was the easiest to install and configure, he says. Instead of having to install from the Scyld CD on each node, "you can install it on the master node, then go through a wizard, which asks how many nodes you want to install it on." The Linux image is then configured for however many nodes are specified. When the cluster comes online, the operating system images are copied to each node and run in memory. The nodes do have hard drives, Kuppa says, but they're not used to store the operating system.
Parallel computing with clustered Linux servers has become a standard tool of the trade for chemical engineers, Kuppa says - just as biologists must be handy with a microscope, geologists must be skilled with a drill, and mathematicians need, well, calculators. For Kuppa, the cluster works much like a high-powered microscope in some ways, creating visual images of complex structures smaller than a nanometer - 80,000 times smaller than a human hair. "But I'd much rather work with [this technology] than just sit in front of a microscope, any day," Kuppa says.
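To give a flavor of the kind of workload such a cluster handles, here is a minimal sketch - not Kuppa's actual code, and not Scyld-specific - of an embarrassingly parallel computation in Python. A toy Lennard-Jones-style pair-energy function (a common ingredient in polymer simulations) is evaluated over many separations at once; on a real Beowulf cluster, the same divide-and-conquer pattern would typically be expressed with MPI across nodes rather than with the `multiprocessing` module on one machine:

```python
from multiprocessing import Pool

def pair_energy(r):
    """Toy Lennard-Jones-style pair energy at separation r (reduced units)."""
    sr6 = (1.0 / r) ** 6
    return 4.0 * (sr6 * sr6 - sr6)

if __name__ == "__main__":
    # Independent work items: one energy evaluation per separation.
    separations = [0.95 + 0.005 * i for i in range(200)]
    # One worker per core here; on a cluster, one rank per node instead.
    with Pool(processes=4) as pool:
        energies = pool.map(pair_energy, separations)
    print(f"total pair energy: {sum(energies):.3f}")
```

Because each evaluation is independent, the work scales out almost linearly with the number of processors - which is exactly why molecular simulations map so well onto clusters of commodity Linux boxes.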