Aurora will be the first U.S. system to top one exaflop in performance. Credit: Jason Gillman

The next step up in supercomputer performance is the exaflop, and there is something of an arms race between nations to get there first, although it's much more benign than the nuclear arms race of the last century. If anything, it's beneficial, because these monster machines will enable all kinds of advanced scientific research.

An exascale computer is capable of one exaflop: one quintillion (1,000,000,000,000,000,000) floating point operations per second. That's roughly a million times more powerful than a fast consumer laptop.

China has said it will have an exascale computer by 2020, one year sooner than the U.S. Meanwhile, the Department of Energy (DoE) recently awarded $258 million in funding to six companies (AMD, Cray, HPE, IBM, Intel and Nvidia) to work on improving the energy efficiency, reliability and overall performance of a national exascale computer system.

Now things have gone one step further: Intel and Cray have announced they will deliver the first exascale supercomputer, code-named Aurora, at Argonne National Laboratory, with a target delivery date of 2021.

Creating the supercomputer is no easy task

Aurora was originally set to be delivered in 2018 as a 180-petaflop system, but the two companies were apparently struggling to meet that deadline. So they pushed the contract out by three years and expanded it to one exaflop of performance.

The original plan called for Aurora to be a Cray Shasta system featuring Intel's upcoming Knights Hill co-processor. It is not known whether the exascale Aurora will use the same components, but given the three-year extension, that seems unlikely.

HPCwire reports that the DoE was not happy with the delay in delivering the pre-exascale Aurora, but canceling the deal would have been even less appealing: it would have meant going back through the long, painful bidding and procurement process, delaying things even further.

Advances in high-performance computing always find their way down into mainstream server technology, and this will be no different. As these systems become more energy efficient and find better ways to share data across nodes, everyone will benefit.

But the delay of the original Aurora shows this isn't easy. Building systems that draw power in the megawatts is complex and time consuming. You can't just throw nodes at the problem; you have to manage power and infrastructure, which is why even companies as smart as Intel and Cray struggle.
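To put those scales in perspective, here is a back-of-envelope sketch in Python. The laptop throughput, node count and per-node wattage below are illustrative assumptions, not Aurora's published specifications.

```python
# Back-of-envelope arithmetic for exascale computing.
# The laptop figure, node count and wattage are illustrative
# assumptions, not Aurora's actual specifications.

EXAFLOP = 1e18       # one quintillion floating point operations per second
LAPTOP_FLOPS = 1e12  # a fast consumer laptop: roughly one teraflop

print(f"Speedup over a laptop: {EXAFLOP / LAPTOP_FLOPS:,.0f}x")  # 1,000,000x

# Why you can't just throw nodes at the problem: power adds up fast.
nodes = 50_000        # assumed node count
watts_per_node = 600  # assumed draw per node (CPU plus accelerators)
total_mw = nodes * watts_per_node / 1e6
print(f"Estimated compute power draw: {total_mw:.0f} MW")  # 30 MW, before cooling
```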