Red Hat reaches the Summit – a new top scientific supercomputer

The new supercomputer at Oak Ridge National Laboratory runs Red Hat Enterprise Linux and heralds a new level of cooperation between vendors.


Red Hat just announced its role in bringing a top scientific supercomputer into service in the U.S. Named “Summit” and housed at the Department of Energy’s Oak Ridge National Laboratory, this system with its 4,608 IBM compute servers is running — you guessed it — Red Hat Enterprise Linux.

The Summit collaborators

With IBM providing its POWER9 processors, Nvidia contributing its Volta V100 GPUs, Mellanox bringing its InfiniBand interconnect into play, and Red Hat supplying Red Hat Enterprise Linux, the level of inter-vendor collaboration has reached something of an all-time high, and an amazing new supercomputer is now ready for business.

Supercomputer designs in the past have been relatively closed, usually involving a single vendor. This multi-year multi-vendor collaboration is setting a significant milestone and providing some other welcome benefits as well — forcing an openness that brings flexibility to the system’s design and relying on a building block architecture that supports a wide range of applications and opportunities for enhancement as well as machine learning.

Why Red Hat?

If you knew that the top 10 fastest supercomputers in the world today all run a variant of Linux, Red Hat’s role in Summit might not be such a surprise. But don’t stop there. The benefit to users of having a familiar OS (many national labs and research centers run Red Hat Enterprise Linux on their systems) makes Summit approachable in a way that older supercomputers have generally not been.

The flexibility and scalability required for ordinary IT operations become considerably more important in supercomputing, with its highly specialized components. Red Hat Enterprise Linux brings stability, support, and an open design to the mix.

The nature of supercomputing

Supercomputing generally entails lots of data and lots of calculations. While I’ve never worked with supercomputers, my brief time in the Physics and Astronomy Department at Johns Hopkins left me with a feel for the enormity of tasks like looking for ways to map the cosmos and studying the nature of subatomic particles. From astrophysics to biology, supercomputers can help to derive answers from dizzying amounts of data, and Summit appears to offer the kind of compute power that will be needed for the world’s most complex problems.

The architecture of Summit

With 4,608 nodes and a peak of approximately 200 petaflops (a petaflop is 10^15 floating-point operations per second), Summit is a huge and fairly intimidating system to look at or contemplate. At the same time, its accessibility through a familiar operating system makes it both approachable and flexible.
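To put those figures in perspective, a quick back-of-the-envelope calculation (using only the node count and peak performance cited above; the per-node figure is a rough average, not an official spec) shows just how much compute each individual node contributes:

```python
# Rough arithmetic based on the figures in this article:
# 4,608 nodes, ~200 petaflops peak (1 petaflop = 10**15 flops).
nodes = 4608
total_flops = 200 * 10**15          # ~200 petaflops, peak

per_node_teraflops = total_flops / nodes / 10**12
print(f"~{per_node_teraflops:.1f} teraflops per node")  # ~43.4 teraflops per node
```

That is, each of Summit’s nodes alone delivers on the order of tens of teraflops — roughly the performance of an entire top-ranked supercomputer from the early 2000s.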

What to expect

Summit will offer unprecedented access to technology capable of addressing some of the world’s most pressing problems. Its next-generation workloads may well change the way research is done today — not just broadening scientific knowledge, but providing real-world benefits. Maybe it will help us come to grips with aspects of climate change that are hard to characterize, maybe it will help us find cures for certain types of cancer, and maybe it will point to answers about mankind’s place in the universe.

More information about Summit and Red Hat

To learn more about this incredible technological accomplishment, check out the press release on the Red Hat blog.
