HPC experts look past petaflop to the exascale

Exascale machines will be super-fast, but how useful will they be?

High-performance computing researchers consider the billion-dollar investment required to usher in the exascale era.

It took decades for the supercomputing industry to ramp up to petaflop speeds.

But two years after the launch of the first petaflop machine -- capable of performing one thousand trillion calculations per second -- at least seven such supercomputers now exist in the United States, Europe and Asia.  



So what’s next? The exascale, a thousand times faster than petascale computing. This week at the SC10 supercomputing conference in New Orleans, high-performance computing (HPC) experts debated whether the industry will hit an exaflop before the end of the decade and, if so, whether the achievement will have been worth the massive expense. Exascale may be 1,000 times faster than petascale -- but will it be 1,000 times more useful?
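To put those prefixes in perspective, the short sketch below (purely illustrative, not something presented at the conference) spells out the arithmetic: a petaflop machine performs 10^15 floating-point operations per second, and an exaflop machine performs 10^18.

```python
# Purely illustrative: the units behind "petascale" and "exascale."
PETAFLOPS = 10**15   # one thousand trillion floating-point operations per second
EXAFLOPS = 10**18    # one quintillion floating-point operations per second

# The factor the SC10 panel is debating: exascale is 1,000x petascale in raw speed.
print(EXAFLOPS // PETAFLOPS)   # -> 1000
```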

The panel of HPC experts generally agreed that the industry will achieve exascale by 2020, but said the first exascale systems will require $1 billion or so in investment and run the risk of being too specialized to solve a wide range of problems.

There is a danger that the $1 billion exascale investment will end up being devoted to applications that don't justify the money spent, experts noted. But a more optimistic view would hold that powerful exascale systems will help cure diseases and solve other problems affecting the entire human population.

Convey Computer chief scientist Steve Wallach referred to "Star Trek" in discussing possible future systems that analyze viruses and bacteria and produce an antidote. "Is that worth a billion dollars? Easy," he said.

Exascale systems could also bolster climate research and improve our ability to respond to disasters such as the BP oil spill. The effects of Mother Nature don't happen on a schedule set by governments and researchers, which is all the more reason to invest heavily in exascale today, said professor William Gropp of the University of Illinois at Urbana-Champaign.

BP's first efforts to cap the oil spill failed, Gropp noted.

"We should have been able to predict that failure. It was a CFD [computational fluid dynamics] problem," Gropp said. "I don't know if that was an exascale problem, a petascale problem or a lousy software problem. But that was something that did not happen on our schedule."

But politics may get in the way of achieving exascale computing, said panel moderator Marc Snir, also of the University of Illinois and a former IBM researcher.

"Let me be blunt, DARPA [the U.S. Defense Advanced Research Projects Agency] doesn't seem to have any interest at this point in exascale," Snir said. "The international collaborations seem to be moving very slowly."

Power will be extremely difficult to manage in future exascale systems, the HPC experts said. "In our exascale report we ended up with four major problems: power, power, power and power," said Peter Kogge of the University of Notre Dame, who led a study on the technology challenges of achieving exascale computing.

Kogge also suggested that, for the commercial market, the "sweet spot" will not be exascale computing but rather a petaflop machine that can be housed in a single rack instead of across many. Wallach predicted that in 2020 there will be only about ten computing groups that can hit exascale capacity, and those will likely be "the people who can use a petaflop today."

In terms of power, it may make sense to partner with the mobile industry because of the focus smartphone makers have on battery life, said Microsoft technical fellow Burton Smith, who previously co-founded Cray, a supercomputing vendor.

"There's another community that's rather large compared to poor old high-performance computing, and that’s the mobile space where battery energy is absolutely the most important thing," Smith said. "Maybe there are trends in the computer industry as a whole that might well be leveraged by the extreme-scale computing. Who knows, there might be some common cause there."

Exascale was a common topic in SC10 sessions. Nvidia chief scientist Bill Dally spoke about graphics processing units and their potential role in future exascale machines, and another panel examined how heterogeneous architectures can boost performance, but also introduce challenges such as "low programmer productivity, no portability, lack of integrated tools and libraries, and very sensitive performance stability."

Even if exascale systems are built this decade, that doesn’t guarantee they will be as useful as one might expect.

Supercomputing speed is generally measured by the Linpack benchmark used to rank the Top 500 list of supercomputing sites. The benchmark is often criticized for not necessarily predicting how useful a system will be on real-world problems.
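For context, Linpack times the solution of a dense system of linear equations, Ax = b, and converts a nominal operation count into a sustained rate. The minimal sketch below uses NumPy's dense solver as a stand-in for the real HPL code to show the shape of the measurement; the problem size and hardware are whatever you run it on, not anything benchmarked at SC10.

```python
# A minimal sketch of a Linpack-style measurement: time a dense solve of
# Ax = b and convert the nominal flop count into a rate.
import time
import numpy as np

n = 4000                                  # problem size (real HPL runs use far larger n)
A = np.random.rand(n, n)
b = np.random.rand(n)

start = time.perf_counter()
x = np.linalg.solve(A, b)                 # LU factorization plus triangular solves
elapsed = time.perf_counter() - start

flops = (2.0 / 3.0) * n**3 + 2.0 * n**2   # nominal operation count used by Linpack
print(f"{flops / elapsed / 1e9:.1f} GFLOPS")
```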

Wallach jokingly suggested putting a Linpack app on Android and iPhone devices, saying "we could get 100,000 people in the world, hooked up together, and we would have the world's fastest Linpack that no one could exceed for a long time." Such an experiment would demonstrate the "stupidity" of Linpack, Wallach said.

Beyond this hypothetical scenario, Allan Snavely of the San Diego Supercomputer Center noted that supercomputers may not be as useful as their measured speed would indicate if data movement is not architected in the most efficient manner.

When computer systems hide, or abstract, the data hierarchy from programmers, "they write terrible code," Snavely said. Data collection and movement, Snavely continued, starts on microscopes, medical scanning devices and discs, and getting it from these devices to a state of usefulness in HPC architectures is not simple.

"The data moving capabilities of much of the HPC architecture is not significantly greater than what people have in their labs or medical labs," Snavely said. "Data doesn't magically appear on floating points, although if you're on Linpack you might start to believe that."

It will be important to study applications and understand their requirements so the industry doesn't end up with machines that aren't useful enough to justify the investment, Snavely said. That doesn't mean the HPC industry has to build machines specifically for certain applications, but there could be "envelopes of usefulness" around machines that map each one to a broad range of applications. Such an envelope would describe attributes such as memory operations and other measures that determine whether a computer can be used for a particular application.
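One way to make such an envelope concrete, though it is not something Snavely spelled out, is a roofline-style check that compares an application's arithmetic intensity (floating-point operations per byte moved) against a machine's balance of peak compute and memory bandwidth. The sketch below uses made-up machine numbers purely for illustration.

```python
# Hypothetical roofline-style check, in the spirit of an "envelope of usefulness":
# is an application limited by the machine's compute or by its ability to move data?
# The machine numbers below are invented for illustration only.

def attainable_flops(peak_flops, mem_bandwidth_bytes_per_s, arithmetic_intensity):
    """Roofline estimate: the lesser of peak compute and bandwidth * flops-per-byte."""
    return min(peak_flops, mem_bandwidth_bytes_per_s * arithmetic_intensity)

peak = 1e18          # 1 exaflop/s peak (hypothetical machine)
bandwidth = 1e16     # 10 PB/s aggregate memory bandwidth (hypothetical)

for name, intensity in [("stencil code", 0.5), ("dense linear algebra", 100.0)]:
    est = attainable_flops(peak, bandwidth, intensity)
    print(f"{name}: ~{est / peak:.1%} of peak")
```

With these invented numbers, a data-hungry stencil code would reach well under one percent of peak while dense linear algebra could approach it, which is the gap between measured speed and usefulness that Snavely was describing.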

The danger, Gropp said, is you might end up with a machine that performs one task extremely well, but can't solve any other problems.

"You might have to have three exascale machines stuck together," he said.

Follow Jon Brodkin on Twitter: www.twitter.com/jbrodkin

