The immensely powerful supercomputers of the not-too-distant future will need serious fault-tolerance technology if they are to fulfill their promise of groundbreaking research.
That’s why the U.S. Department of Energy’s Office of Advanced Scientific Computing Research this week said it is looking for “basic research that significantly improves the resiliency of scientific applications in the context of emerging architectures for extreme scale computing platforms. Extreme scale is defined as approximately 1,000 times the capability available today. The next-generation of scientific discovery will be enabled by research developments that can effectively harness significant or disruptive advances in computing technology.”
According to the DOE, applications running on extreme computing systems will generate results with orders of magnitude higher resolution.
“However, indications are that these new systems will experience hard and soft errors with increasing frequency, necessitating research to develop new approaches to resilience that enable applications to run efficiently to completion in a timely manner and achieve correct results,” the agency stated.
Today, the DOE says that 20% or more of the computing capacity in a large high performance computing facility is wasted due to failures and recoveries. The situation is expected to worsen sharply as systems increase in size and complexity, wasting even more capacity. Research is required to improve the resilience of the systems and the applications that run on them, the agency stated.
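To see where that lost capacity comes from, consider checkpoint/restart, the standard resilience mechanism on today's machines: the application periodically saves its state, and after a failure it rolls back and redoes the work since the last checkpoint. A rough sketch of the trade-off, using Young's well-known first-order approximation for the optimal checkpoint interval (the numbers below are hypothetical, chosen only to illustrate how waste grows as mean time between failures shrinks):

```python
import math

def wasted_fraction(checkpoint_cost_s, mtbf_s, interval_s):
    """Approximate fraction of machine time lost to writing
    checkpoints plus redoing work after failures (first-order model)."""
    # Overhead of writing one checkpoint per interval of useful work.
    checkpoint_overhead = checkpoint_cost_s / interval_s
    # On average, half an interval of work is lost per failure.
    rework = interval_s / (2 * mtbf_s)
    return checkpoint_overhead + rework

def optimal_interval(checkpoint_cost_s, mtbf_s):
    """Young's approximation for the interval that minimizes waste:
    sqrt(2 * checkpoint_cost * MTBF)."""
    return math.sqrt(2 * checkpoint_cost_s * mtbf_s)

# Hypothetical scenario: a 10-minute checkpoint, on a system that
# fails once a day versus once an hour (as might happen at extreme scale).
for mtbf_h in (24, 1):
    mtbf_s = mtbf_h * 3600
    tau = optimal_interval(600, mtbf_s)
    print(f"MTBF {mtbf_h:>2} h: checkpoint every {tau / 60:.0f} min, "
          f"waste {wasted_fraction(600, mtbf_s, tau):.0%}")
```

With a daily failure the model predicts roughly 12% of capacity lost, in the same ballpark as the DOE's 20% figure; cut the MTBF to an hour and the lost fraction balloons past half the machine, which is why straightforward checkpointing alone is not expected to scale.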
The DOE says the research it is looking for focuses on three things:
The DOE states that a variety of factors will contribute to increased rates of faults and errors on extreme scale systems, a few of which are:
- The number of components with both memory and processors will increase by an order of magnitude, and the number of system components is increasing faster than component reliability, resulting in an increase in hard and soft errors;
- Constraining hardware and software to a power envelope of 20 megawatts will necessitate operating components at near-threshold power levels, and power levels may vary over time, making errors more likely; and
- Use of the machines will require managing unprecedented parallelism and complexity, especially at the node level of extreme scale systems, increasing the likelihood of programmer errors.
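Soft errors in particular are insidious because a flipped bit can silently corrupt a result. One classic software-level mitigation is redundant execution with majority voting (triple modular redundancy). A minimal sketch, not any specific DOE approach, assuming the computation is deterministic and returns hashable results:

```python
from collections import Counter

def tmr(compute, *args):
    """Run a computation three times and return the majority result.

    A single silent soft error in one replica is masked by the other
    two; if all three replicas disagree, the error is detected but
    cannot be corrected, so we fail loudly rather than return garbage.
    """
    results = [compute(*args) for _ in range(3)]
    value, votes = Counter(results).most_common(1)[0]
    if votes < 2:
        raise RuntimeError("unrecoverable: all three replicas disagree")
    return value
```

The obvious drawback is that it triples the compute cost, which is exactly why the DOE solicitation asks for cheaper, application-aware resilience techniques rather than brute-force redundancy.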