
America’s first exascale supercomputer set for 2021 debut

News Analysis
Sep 29, 2017 | 3 mins
Data Center, Servers

Aurora will be the first U.S. system to top one exaflop in performance.

The next step up in supercomputer performance is exaflops, and there is something of an arms race between nations to get there first — although it’s much more benign than the nuclear arms race of the last century. If anything, it’s beneficial because these monster machines will allow all kinds of advanced scientific research. 

An exascale computer is capable of one exaflop, or one quintillion (1,000,000,000,000,000,000) floating point operations per second. That's roughly a million times more powerful than a teraflop-class consumer laptop.
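The scale comparison above can be sketched with a few lines of arithmetic. The laptop figure is an assumption for illustration; a modern consumer machine sustains somewhere on the order of hundreds of gigaflops to a teraflop, so a teraflop is used as a round, optimistic baseline:

```python
# Rough scale comparison; figures are illustrative assumptions.
EXAFLOP = 10**18       # one quintillion floating point ops/second
LAPTOP_FLOPS = 10**12  # assumed teraflop-class consumer laptop

speedup = EXAFLOP // LAPTOP_FLOPS
print(f"An exascale machine is about {speedup:,}x faster")  # about 1,000,000x
```

Swapping in a more modest 100-gigaflop laptop pushes the ratio to ten billion, which shows how sensitive these "X times faster" claims are to the baseline chosen.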


China has said it will have an exascale computer by 2020, one year sooner than the U.S.

Meanwhile, the Department of Energy (DoE) recently awarded $258 million in funding to six companies — HPE, IBM, Intel, Nvidia, Advanced Micro Devices and Cray — to work on improving the energy efficiency, reliability and overall performance of a national exascale computer system.

Now things have gone one step further, with Intel and Cray announcing they will deliver the first U.S. exascale supercomputer, code-named Aurora, at Argonne National Laboratory, with a target delivery date of 2021.

Creating the supercomputer is no easy task

Aurora was originally slated for delivery in 2018 as a 180-petaflop system, but the two companies were reportedly struggling to meet that deadline. So the contract was pushed out by three years and expanded to one exaflop of performance.

The original plan for Aurora was for it to be a Cray Shasta system featuring the upcoming Knights Hill co-processor. It is not known whether the exascale Aurora system will use the same components, but given the three-year extension, that seems unlikely.

HPC Wire reports that the DoE was not happy with the delay in delivering the pre-exascale Aurora, but cancelling the deal would have been even less appealing: it would have meant restarting the bidding and procurement process, a long, painful exercise that would have delayed things even further.

Advances in high-performance computers always find their way down into mainstream server technology, and this will be no different. As they become more energy efficient and find better ways to share data across nodes, it will benefit everyone. 

But the delay in the original Aurora shows this isn’t easy. Building these systems, which use power in the megawatts, is complex and time consuming. You can’t just throw nodes at the problem; you need to manage power and infrastructure, which is why even companies as smart as Intel and Cray struggle.

Andy Patrizio is a freelance journalist based in southern California who has covered the computer industry for 20 years and has built every x86 PC he’s ever owned, laptops not included.

The opinions expressed in this blog are those of the author and do not necessarily represent those of ITworld, Network World, its parent, subsidiary or affiliated companies.
