Network World - Bruce Allen is perhaps the world's best do-it-yourselfer. When he needed a supercomputer to crunch the results of gravitational-wave research, he built one with his colleagues at the University of Wisconsin-Milwaukee.
That was in 1998, and since then he's built three more supercomputers, all in pursuit of observing gravitational waves, which theory predicts emanate from black holes orbiting each other and from exploding stars, but which have never been directly detected.
His most recent supercomputer, a cluster of 1,680 machines with four cores each, is in Hanover, Germany. Essentially, it's a 6,720-core machine that, in the months after it was built, was ranked number 58 in the world. "We filled our last row of racks recently, and we're number 79 on the current Top500 list now," says Allen, a director of the Max Planck Institute.
He builds his own for several reasons, including that he thinks he gets more for his money when he does the work himself.
"If you then go and look at Pricewatch or some other place where you can find out how much the gear really costs, you find out that if you build something yourself with the same money you'll end up with two or three times the processing power."
The problem, he says, is that big-name companies carry a lot of overhead in the form of layers of management and engineering. "They do sell good products, and you don't need to have any particular expertise to buy them," he says. "It's always been my experience that if I do it myself I get more bang for my buck."
For instance, his first supercomputer was a Linux cluster built from 48 bargain DEC Alpha servers that had been discontinued, each with a single 300-MHz 64-bit AXP processor. "So I got a very good deal on them. I think the list price was $6,000 and I bought them after they were end-of-lifed for $800," Allen says. "The switch was a 3Com SuperStack 100Mbps Ethernet switch. I think it was a pair of them, each with 24 ports, connected by a matrix cable."
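As a rough sanity check on Allen's "two or three times the processing power" claim, the per-server prices he quotes can be turned into back-of-envelope arithmetic. This is an illustrative sketch using only the figures in the article, not a rigorous total-cost model (it ignores the switches, shelving, and labor):

```python
# Back-of-envelope cost comparison for Allen's first cluster,
# using the per-server prices quoted in the article.

NUM_SERVERS = 48
EOL_PRICE = 800       # end-of-life price per DEC Alpha server, USD
LIST_PRICE = 6_000    # original list price per server, USD

diy_cost = NUM_SERVERS * EOL_PRICE       # what the discontinued servers cost
list_cost = NUM_SERVERS * LIST_PRICE     # what the same hardware cost at list
per_server_factor = LIST_PRICE / EOL_PRICE

print(f"DIY hardware cost:   ${diy_cost:,}")      # $38,400
print(f"At list price:       ${list_cost:,}")     # $288,000
print(f"Price advantage:     {per_server_factor:.1f}x per server")
```

On these numbers alone the end-of-life servers were 7.5 times cheaper per node, which comfortably supports his estimate that a do-it-yourself build delivers a multiple of the processing power per dollar.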
The servers were housed in a room slightly larger than a closet, on particle-board shelves bought at Home Depot. "It wasn't even racks, because rack-mounted systems would have raised the price significantly," Allen says. The whole thing used about 200 watts of power, and the university facilities staff had to remove flaps from the air ducts feeding the room so the heat could be dissipated efficiently.
The total cost was about $70,000, which he got from the National Science Foundation (NSF). The grant was actually for eight high-end Sun workstations, but he spent it on the Linux cluster instead.
"About a year later I was giving a scientific talk about this, and the two program managers from the NSF came up to me afterwards," he says. "I sort of shamefacedly apologized. I said, 'Well, I hope you're not angry that I went ahead and did this anyway.'