
HPC experts look past petaflop to the exascale

Exascale machines will be super-fast, but how useful will they be?

Network World
November 18, 2010 11:02 AM ET


In terms of power, it may make sense to partner with the mobile industry because of the focus smartphone makers have on battery life, said Microsoft technical fellow Burton Smith, who previously co-founded Cray, a supercomputing vendor.

"There's another community that's rather large compared to poor old high-performance computing, and that’s the mobile space where battery energy is absolutely the most important thing," Smith said. "Maybe there are trends in the computer industry as a whole that might well be leveraged by the extreme-scale computing. Who knows, there might be some common cause there."

Exascale was a common topic in SC10 sessions. Nvidia chief scientist Bill Dally spoke about graphics processing units and their potential role in future exascale machines, and another panel examined how heterogeneous architectures can boost performance, but also introduce challenges such as "low programmer productivity, no portability, lack of integrated tools and libraries, and very sensitive performance stability."

Even if exascale systems are built this decade, that doesn’t guarantee they will be as useful as one might expect.

Supercomputing speed is generally measured by the Linpack benchmark, which is used to rank systems on the Top 500 supercomputing sites list. The benchmark is often criticized for not necessarily predicting the usefulness of a system in solving real-world problems.
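What Linpack actually measures is the speed of solving a dense system of linear equations. A minimal sketch in Python with NumPy conveys the idea (the problem size and timing harness here are illustrative, not the official benchmark code): time the solve, then divide the benchmark's standard operation count of 2/3·n³ + 2·n² by the elapsed time.

    import time
    import numpy as np

    n = 2000                      # illustrative problem size, not an official Linpack size
    A = np.random.rand(n, n)      # random dense coefficient matrix
    b = np.random.rand(n)         # right-hand side

    start = time.perf_counter()
    x = np.linalg.solve(A, b)     # LU factorization plus triangular solves
    elapsed = time.perf_counter() - start

    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2   # standard Linpack operation count
    print(f"{flops / elapsed / 1e9:.2f} GFLOP/s")

Note that nothing in this measurement touches disk, network or instrument data, which is precisely the criticism that follows.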

Wallach jokingly suggested releasing a Linpack app for Android and iPhone devices, saying "we could get 100,000 people in the world, hooked up together, and we would have the world's fastest Linpack that no one could exceed for a long time." Such an experiment would demonstrate the "stupidity" of Linpack, Wallach said.

Beyond this hypothetical scenario, Allan Snavely of the San Diego Supercomputer Center noted that supercomputers may not be as useful as their measured speed suggests if data movement is not architected efficiently.

When computer systems hide, or abstract, the data hierarchy from programmers, "they write terrible code," Snavely said. Data collection and movement, Snavely continued, starts on microscopes, medical scanners and disks, and getting it from those devices into a usable state on HPC architectures is not simple.

"The data moving capabilities of much of the HPC architecture is not significantly greater than what people have in their labs or medical labs," Snavely said. "Data doesn't magically appear on floating points, although if you're on Linpack you might start to believe that."

It will be important to study applications and understand their requirements so the industry doesn't end up with machines that aren't useful enough to justify the investment, Snavely said. That doesn't mean the HPC industry has to build machines for specific applications, but there could be "envelopes of usefulness" around machines that match each one to a broad range of applications. Such an envelope would describe attributes such as memory operations and other measures that determine whether a computer suits a particular application.
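The article doesn't spell out what an "envelope of usefulness" would look like, but the roofline model is one standard way of expressing the idea: a machine's peak floating-point rate and memory bandwidth jointly bound what any application can achieve at a given arithmetic intensity. A hedged sketch in Python, with entirely hypothetical machine and application numbers:

    # Roofline-style "envelope" check; all figures below are hypothetical.
    PEAK_GFLOPS = 1000.0   # machine's peak floating-point rate, GFLOP/s
    PEAK_BW_GBS = 100.0    # machine's peak memory bandwidth, GB/s

    def attainable_gflops(intensity_flops_per_byte):
        """Upper bound on performance at a given arithmetic intensity."""
        return min(PEAK_GFLOPS, PEAK_BW_GBS * intensity_flops_per_byte)

    for name, intensity in [("stencil kernel", 0.5), ("dense matrix multiply", 30.0)]:
        bound = attainable_gflops(intensity)
        regime = "memory-bound" if bound < PEAK_GFLOPS else "compute-bound"
        print(f"{name}: at most {bound:.0f} GFLOP/s ({regime})")

An application whose attainable rate is capped by bandwidth rather than arithmetic falls outside the machine's compute envelope, which mirrors Snavely's concern about data movement.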
