The march toward exascale computers

High-performance computing is reaching new heights, especially in China.

Texas' Stampede supercomputer
Texas Advanced Computing Center

It's good to be near the top of the list.

"No one else can do some of the things we offer," says Sverre Brandsberg-Dahl, head geophysicist at Petroleum Geo-Services (PGS), a global oilfield services and seismic exploration company. His Houston installation is the second-largest commercial entry on the list and #16 overall.

The list is called the Top500, and it ranks high-performance computers (HPCs) by raw speed. Jack Dongarra, now a professor at the Center for Information Technology Research at the University of Tennessee, explains that the list started by accident: in 1979 he wrote a benchmark (called Linpack) based on the time required to solve matrix problems.
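Linpack-style benchmarks rate a machine by timing a dense linear solve and converting that time into floating-point operations per second. A minimal sketch of the idea in Python (the problem size `n` and the standard ~(2/3)n³ operation count for LU factorization are illustrative assumptions, not the official benchmark code):

```python
import time
import numpy as np

# Time a dense solve of Ax = b and convert to flop/s.
# (2/3)n^3 is the conventional operation count for LU factorization,
# the dominant cost of a dense solve.
n = 2000  # illustrative size; real Linpack runs use far larger systems
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

start = time.perf_counter()
x = np.linalg.solve(A, b)
elapsed = time.perf_counter() - start

flops = (2.0 / 3.0) * n**3 / elapsed
print(f"Solved {n}x{n} system in {elapsed:.3f} s ~ {flops / 1e9:.1f} gigaflops")
```

The Top500 uses the same principle at supercomputer scale: the submitted number is the sustained flop/s achieved on the largest solve the machine can handle.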

Oak Ridge National Laboratory

The Summit supercomputer at Oak Ridge National Laboratory will have IBM Power9 chips and deliver 200 petaflops of performance when deployed in early 2018.

A friend started gathering the benchmark results and the first full list was published in 1993. It is not a full census of supercomputers, as the list contains only machines whose results have been submitted for inclusion.

"I have no idea on how many don't show up," Dongarra notes. "Obviously NSA [National Security Administration] machines are not counted -- and I'm told that the NSA has some big machines."

In 1993 the largest supercomputer was 60 gigaflops (billion floating point operations per second), and now it's 93 petaflops (quadrillion floating point operations per second), Dongarra notes. "Giga to tera to peta -- each step is three orders of magnitude, so we have gone up six orders of magnitude," he explains. "So the machines are a million times faster than in 1993, for about the same price."
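Dongarra's arithmetic checks out: each unit prefix is a factor of 1,000, and two prefix jumps (giga to tera, tera to peta) give six orders of magnitude. A quick sanity check of the figures quoted above:

```python
# Units: giga = 1e9, tera = 1e12, peta = 1e15 flop/s.
top_system_1993 = 60e9   # 60 gigaflops, largest machine on the first list
top_system_now = 93e15   # 93 petaflops, largest machine today

speedup = top_system_now / top_system_1993
print(f"{speedup:,.0f}x faster")  # roughly 1.55 million-fold
```

That 1.55-million-fold ratio is slightly more than six orders of magnitude, matching the "million times faster" figure.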
