Making a supercomputer these days is a lot easier than you might think. I even helped build one myself in just one day earlier this month.
The call to create the processing beast came from a group of University of San Francisco graduate students whose goal was to entice hundreds of computer enthusiasts like me to unite their laptop and desktop systems into a megacomputer powerful enough to crack the list of the Top 500 fastest supercomputers on Earth. Topping that list is Japan's 40-teraflop Earth Simulator.
Armed with my 2.4-GHz Pentium 4 Compaq notebook, I joined roughly 300 people at the USF gym where we were put to work building the so-called Flashmob1 supercomputer.
Overnight, staff and students had transformed the gym into a massive data center. Neat bundles of Ethernet cables lined the edges of the rows of tables that filled the gym. The cables trailed from four Foundry Networks switches that were positioned around the room. An imposing platform stood in the middle of the gym where lecturers and a few students would sit and monitor Flashmob1's beating heart.
After checking our computers with the security monitors at the gym's entrance, each volunteer was given directions to the table that would house their particular speed of machine. "A Pentium 4?" asked my security monitor with a nod of approval. "Go to the back of the room."
There, a hub captain, dressed in the staff uniform of a black T-shirt with the Flashmob1 logo, helped me set up my machine. I told him I'd brought my power pack. "Good, you'll need it," he said. "If your laptop runs on battery, it'll last just 10 minutes calculating this benchmark."
He booted up a CD that contained all the software that my machine would run that day. "A supercomputer on a disk" was how it was described by Pat Miller, a computer scientist at the Lawrence Livermore National Laboratory in Livermore, Calif., and a USF lecturer.
Miller seemed relaxed considering he was expecting 1,400 machines to turn up. A system with that many nodes would boast 600 gigaflops of processing power, enough to crunch the benchmark that all Top 500 hopefuls need to run in less than four hours, and put our Flashmob1 supercomputer at the bottom of the next Top 500 list that's published in June.
It was Miller's do-it-yourself supercomputing class that sparked the idea of Flashmob Computing. Inspired by "flashmobs," a recent craze of organizing a group of strangers to turn up at a given place to do something off-the-wall, like singing a song, and then disperse once the deed is done, the students wanted to invite volunteers to donate their machines for the day.
The task was enormous. "The limiting factor is memory size [of the nodes]," Miller said. I thought about my laptop's 512M-byte memory capacity. "We're taking in a lot of boxes, and some could randomly break . . . there could be bad network cables. We're beating on the memory very fast, and the processor would be working all the time." If one of the computers failed during the benchmark, then Flashmob1 would collapse.
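Those figures can be sanity-checked with a little arithmetic. The sketch below is a back-of-envelope estimate only, assuming the Top 500 benchmark is HPL-style Linpack (a dense N-by-N linear solve), that the matrix may use roughly 80% of total memory, and that every node carries 512M bytes like my laptop; none of those specifics come from the organizers.

```python
# Back-of-envelope sizing for a Linpack-style run, using the article's
# figures plus illustrative assumptions (memory fraction, per-node RAM).

NODES = 1400                      # machines the organizers hoped for
TARGET_GFLOPS = 600               # aggregate rate quoted by Miller
MEM_PER_NODE_BYTES = 512 * 2**20  # assume 512M bytes per node

# Per-node rate implied by the target: well under 1 gigaflop each.
per_node_gflops = TARGET_GFLOPS / NODES

# HPL's N x N double-precision matrix needs about 8*N^2 bytes; assume
# it may occupy ~80% of total memory (a common rule of thumb).
total_bytes = NODES * MEM_PER_NODE_BYTES
n = int((0.8 * total_bytes / 8) ** 0.5)

# HPL's operation count is roughly (2/3)*N^3 + 2*N^2 floating-point ops.
flops = (2 / 3) * n**3 + 2 * n**2

# Wall-clock estimate at the target aggregate rate.
hours = flops / (TARGET_GFLOPS * 1e9) / 3600

print(f"per node: {per_node_gflops:.2f} gigaflops")
print(f"N ~ {n:,}; run time ~ {hours:.1f} hours")
```

Under these guessed parameters the run lands somewhere over six hours; the estimate is sensitive to the memory fraction and per-node RAM, and the team would have tuned the problem size to fit its four-hour window.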
By 11 a.m., everyone except hub captains had to clear the gym; a test of the system was about to take place. Experts at HP had advised the organizers to test the benchmark on small groups of computers at a time to weed out the weaklings. For a typical 1,000-node supercomputer, HP says it takes about a month and a half to test the nodes and another three months to do acceptance testing. The Flashmob team had two hours before its self-imposed deadline of 1 p.m. to begin the first of two benchmarks. The supercomputer would be dismantled at 6 p.m., and the best result would be submitted to the Top 500.
Foundry engineers had spent the previous evening setting up and testing the backbone network. The company provided a 10G Ethernet network based on four FastIron 1500 Layer 2/3 modular network switches. Each came loaded with a two-port 10G Ethernet module and six 48-port 10/100 modules, which supported the clients.
Volunteers could watch the Flashmob team at work from several windows. We saw a few people at the platform glued to their monitors, but the other hub captains sat talking and guzzling Red Bull energy drinks.
At 4 p.m., after attending some head-spinning talks about supercomputers, I returned to the gym. The news wasn't good. Bad network interface cards in some machines were causing huge problems.
"Although [the computers' network connections] are meant to be rated 100 Base-T, maybe some of them weren't so high," Miller said. The supercomputer also hit problems when the Flashmob software tried to use the wireless LAN card inside some of the computers.
The final number of systems brought in by volunteers was 700. In the end, after much testing and trying to pinpoint the network problem, the Flashmob team began a benchmark test at 4:15 p.m. with 256 computers. After 70 minutes, the supercomputer had completed 75% of the calculation before a bad node made it collapse. That test yielded a performance rating of 180 gigaflops. That's not nearly the 600 gigaflops hoped for, but it is quite a system in itself: a supercomputer with that performance could be used to do plasma modeling, said Greg Benson, assistant professor of computer science at USF.
Despite the technical problems, organizers were excited they had achieved one of their goals: to prove that supercomputers can be built using ordinary computers. They envision a time when communities could build mini-Flashmob computers to answer ad hoc queries. Flashmob computers could help high school students study the ozone hole, or help a group of neighbors predict the outcome of gas leaks.
Both Miller and Benson said they will be involved in other Flashmob efforts with the scientific community, and that many universities across the world had contacted USF with an interest in setting up Flashmob2.
The university had built the Flashmob software to let the team pinpoint memory and CPU problems, but in the end, they were hammered by client-side network issues that they hadn't anticipated.