The next step up in supercomputer performance is exaflops, and there is something of an arms race between nations to get there first, though it's far more benign than the nuclear arms race of the last century. If anything, it's beneficial, because these monster machines will enable all kinds of advanced scientific research.

An exascale computer can perform one exaflop: one quintillion (1,000,000,000,000,000,000) floating point operations per second. That's roughly a million times more powerful than a teraflop-class consumer laptop.

China has said it will have an exascale computer by 2020, one year sooner than the U.S.

Meanwhile, the Department of Energy (DoE) recently awarded $258 million in funding to six companies (AMD, Cray, HPE, IBM, Intel and Nvidia) to work on improving the energy efficiency, reliability and overall performance of a national exascale computer system.

Now things have gone one step further: Intel and Cray have announced they will deliver the first exascale supercomputer, code-named Aurora, at Argonne National Laboratory, with a target delivery date of 2021.

Creating the supercomputer is no easy task

Aurora was originally slated for 2018 as a 180-petaflop system, but the two companies were apparently struggling to meet that deadline. So they pushed the contract out by three years and expanded it to one exaflop of performance.

The original plan called for Aurora to be a Cray Shasta system featuring the upcoming Knights Hill co-processor.
It is not known whether the exascale Aurora system will use the same components, but given the three-year extension, that seems unlikely.

HPC Wire reports that the DoE was unhappy with the delay in delivering the pre-exascale Aurora, but cancelling the deal would have been even less appealing: it would have meant restarting the bidding and procurement process, which is long and painful and would have delayed things even further.

Advances in high-performance computing always find their way down into mainstream server technology, and this will be no different. As these systems become more energy efficient and find better ways to share data across nodes, everyone will benefit.

But the delay in the original Aurora shows this isn't easy. Building these systems, which draw power in the megawatts, is complex and time-consuming. You can't just throw nodes at the problem; you have to manage power and infrastructure, which is why even companies as capable as Intel and Cray struggle.