
HPC experts look past petaflop to the exascale

Exascale machines will be super-fast, but how useful will they be?

By Network World
November 18, 2010 11:02 AM ET

Network World - It took decades for the supercomputing industry to ramp up to petaflop speeds.

But two years after the launch of the first petaflop machine -- capable of performing one thousand trillion calculations per second -- at least seven such supercomputers now exist in the United States, Europe and Asia.  


So what’s next? The exascale, a thousand times faster than petascale computing. This week at the SC10 supercomputing conference in New Orleans, high-performance computing (HPC) experts debated whether the industry will hit an exaflop before the end of the decade and, if so, whether the achievement will have been worth the massive expense. Exascale may be 1,000 times faster than petascale -- but will it be 1,000 times more useful?
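To put those prefixes in perspective, the arithmetic behind the "1,000 times faster" claim comes straight from the standard scale factors (these figures are general definitions, not numbers quoted by the panel):

\[
1~\text{petaflop} = 10^{15}~\text{FLOPS}, \qquad
1~\text{exaflop} = 10^{18}~\text{FLOPS} = 1{,}000~\text{petaflops}
\]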

The panel of HPC experts generally agreed that the industry will achieve exascale by 2020, but said the first exascale systems will require $1 billion or so in investment and run the risk of being too specialized to solve a wide range of problems.

There is a danger that the $1 billion exascale investment will end up being devoted to applications that don't justify the money spent, experts noted. But a more optimistic view would hold that powerful exascale systems will help cure diseases and solve other problems affecting the entire human population.

Convey Computer chief scientist Steve Wallach referred to "Star Trek" in discussing possible future systems that analyze viruses and bacteria and produce an antidote. "Is that worth a billion dollars? Easy," he said.

Exascale systems could also bolster climate research and improve our ability to respond to disasters such as the BP oil spill. The effects of Mother Nature don't happen on a schedule set by governments and researchers, which is all the more reason to invest heavily in exascale today, said professor William Gropp of the University of Illinois at Urbana-Champaign.

BP's first efforts to cap the oil spill failed, Gropp noted.

"We should have been able to predict that failure. It was a CFD [computational fluid dynamics] problem," Gropp said. "I don't know if that was an exascale problem, a petascale problem or a lousy software problem. But that was something that did not happen on our schedule."

But politics may get in the way of achieving exascale computing, said panel moderator Marc Snir, also of the University of Illinois and a former IBM researcher.

"Let me be blunt, DARPA [the U.S. Defense Advanced Research Projects Agency] doesn't seem to have any interest at this point in exascale," Snir said. "The international collaborations seem to be moving very slowly."

Power will be extremely difficult to manage in future exascale systems, the HPC experts said. "In our exascale report we ended up with four major problems: power, power, power and power," said Peter Kogge of the University of Notre Dame, who led a study on technology challenges in achieving exascale computing.

Kogge also suggested that, for the commercial market, the "sweet spot" will not be exascale computing but rather a petaflop machine that can be housed in a single rack rather than spread across many. Wallach predicted that in 2020 there will be only about ten computing groups able to use exascale capacity, and those will likely be "the people who can use a petaflop today."
