Name that parallel processing machine 100X faster than current PCs -- seriously

Researchers at the University of Maryland have come up with a desktop parallel computing system they say is 100 times faster than current PCs, and the kicker is they want you to name it.


That's right: the researchers are inviting the public to propose names for the prototype, which they say uses a circuit board about the size of a license plate on which they have mounted 64 parallel processors.

To control those processors, they have developed the crucial parallel computer organization that allows the processors to work together and makes programming practical and simple for software developers, said Uzi Vishkin of the University of Maryland's A. James Clark School of Engineering, who led the team that developed the machine.

The winner will receive $500 in cash and be credited with naming the technology, Vishkin said in a release. Visitors can submit their ideas online at the Clark School of Engineering website.

As for the name, it should reflect the features and bold aspirations of the new machine and its parallel computing capabilities, Vishkin said.

Parallel processing on a massive scale, based on interconnecting numerous chips, has been used for years to create supercomputers. However, its application to desktop systems has been a challenge because of severe programming complexities.

The Clark School team found a way to use single-chip parallel processing technology to change that.

"Suppose you hire one person to clean your home, and it takes five hours, or 300 minutes, for the person to perform each task, one after the other," Vishkin said. "That's analogous to the current serial processing method.

"Now imagine that you have 100 cleaning people who can work on your home at the same time! That's the parallel processing method. The 'software' challenge is: Can you manage all the different tasks and workers so that the job is completed in 3 minutes instead of 300?" Vishkin continued.

"Our algorithms make that feasible for general-purpose computing tasks for the first time."

Vishkin and his team are now demonstrating their technology, which in future devices could include 1,000 processors on a chip the size of a fingernail, to government and industry groups.
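Vishkin's cleaning analogy reduces to simple arithmetic. The sketch below is an illustration of that back-of-the-envelope model only (the task counts and timings are hypothetical, not measurements from the prototype, and it assumes coordination among workers is free):

```python
import math

def serial_time(num_tasks: int, minutes_per_task: int) -> int:
    """One worker performs every task, one after the other."""
    return num_tasks * minutes_per_task

def parallel_time(num_tasks: int, minutes_per_task: int, workers: int) -> int:
    """Workers proceed in lockstep rounds; each round completes up to
    `workers` tasks, so total time is ceil(tasks/workers) rounds."""
    rounds = math.ceil(num_tasks / workers)
    return rounds * minutes_per_task

# 100 cleaning tasks of 3 minutes each: 300 minutes for one person...
print(serial_time(100, 3))         # 300
# ...but 3 minutes with 100 workers, if coordination is free.
print(parallel_time(100, 3, 100))  # 3
```

The "software challenge" Vishkin describes is precisely that coordination is not free in practice: dividing the work, scheduling it, and synchronizing the workers is what parallel programming models must make manageable.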

A physical look at the processor finds:

- Clock rate: 75 MHz
- Memory size: 1GB DDR2
- Memory data rate: 2.4GB/s
- Number of TCUs: 64 (4 x 16)
- Shared cache size: 256KB (32 x 8)
- MTCU local cache: 8KB
- FPGAs: 3 Xilinx field-programmable gate array chips (2 Virtex-4 LX200 and 1 Virtex-4 FX100)

Vishkin says the prototype device's physical hardware attributes are strikingly ordinary: standard computer components executing at 75 MHz.

It is the device's parallel architecture, ease of programming and processing performance relative to other computers with the same clock speed that get people's attention.

Vishkin presented his computer last week at Microsoft's Workshop on Many-Core Computing. "Parallel computing has been a strategic area of growth for computer science and engineering since the 1940s," he said. "So far, parallel computing has affected mainstream computer science only in a limited way. The key problem with parallel computers has been their programmability."

The parallel algorithms research community has developed a theory of parallel algorithms for a very simple parallel computation model, the parallel random-access machine (PRAM).

That theory appears to be second in magnitude only to serial algorithmics. However, the evolution of parallel computers never reached a point where the PRAM offered an effective abstraction for them.

So, this elegant algorithmic theory remained in the ivory towers of theorists.

Not only has it never been matched with a real computer system, but there has hardly been an experimental study of what works better, more refined performance measurement, or a broad study of applications.
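To give a flavor of what a PRAM-style algorithm looks like, here is a hedged sketch, in plain Python rather than any actual code for Vishkin's machine, of the classic parallel summation: in each synchronous round, every "processor" combines one pair of elements at once, so n values are summed in about log2(n) rounds instead of n-1 serial additions:

```python
def pram_sum(values):
    """Simulate a PRAM-style parallel reduction.

    In each synchronous round, processor i adds elements 2i and 2i+1;
    the array halves every round, so the depth is ceil(log2(n)) rounds
    rather than n-1 sequential steps.
    """
    data = list(values)
    rounds = 0
    while len(data) > 1:
        # All of these pairwise additions happen in one parallel step.
        data = [data[i] + data[i + 1] if i + 1 < len(data) else data[i]
                for i in range(0, len(data), 2)]
        rounds += 1
    return data[0], rounds

total, depth = pram_sum(range(1, 9))  # sum 1..8 with 8 elements
print(total, depth)  # 36 3
```

The PRAM's appeal is exactly this simplicity: the programmer reasons about lockstep rounds of concurrent operations, leaving the hardware to supply the shared memory and synchronization that make the abstraction hold.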

"This system represents a significant improvement in generality and flexibility for parallel computer systems because of its unique abilities," said Burton Smith, technical fellow for advanced strategies and policy at Microsoft, in a press release.

"It will be able to exploit a wider spectrum of parallel algorithms than today's microprocessors can, and this in turn will help bring general-purpose parallel computing closer to reality."

In related news, processors made from new materials or that can reduce power to individual cores as needed were among the innovations presented at Microprocessor Forum 2007, hosted by the technology research firm In-Stat.

For years, chip makers focused on making faster processors, following Moore's Law, named for Intel cofounder Gordon Moore, which holds that the number of transistors on a chip doubles roughly every two years.

More recently, chip makers have tried to improve energy efficiency, both to lengthen battery life in portable devices and reduce electrical use in servers and other computers.

Electricity not only costs more, but generating it also causes pollution. Pressure to decrease power use and related carbon emissions is regarded by some as "Gore's Law," so called for environmental activist and former U.S. vice president Al Gore.
