In addition to providing enormous cloud-based processing power, the service will automatically determine what resources a computing task needs and link them together. It adds and drops processing power mid-task as needed, then disconnects when the task is complete.
This turns traditional use of supercomputing upside down, says Prof. Manish Parashar, of the department of electrical and computer engineering at Rutgers University. Until now, users would have to adapt their tasks to the particular supercomputer they had available and accept limitations, such as how long it would take to complete a task.
With supercomputing as a service, users state their task and software intelligence determines how much computing power they need and then goes out and assembles it virtually.
The technology is so automated that a simple custom application on an iPad can input the task along with key parameters such as the level of accuracy required and time constraints. The cloud infrastructure then compiles the needed resources and kicks off the calculations.
The automated federation of supercomputing clusters that amasses the required resources takes place entirely in the background, requiring no configuration by the end user.
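The workflow described above, in which the user states only the task and a few key parameters while the service sizes the resources, might look something like the following sketch. All names here are illustrative assumptions, not CometCloud's actual API, and the sizing model is deliberately naive:

```python
import math
from dataclasses import dataclass

@dataclass
class TaskRequest:
    """What the end user supplies: the task plus key parameters."""
    name: str
    accuracy: float        # e.g. relative error tolerance required
    deadline_hours: float  # time constraint for completion

def estimate_nodes(req: TaskRequest, node_hours_needed: float) -> int:
    """Translate the user's constraints into a resource request.

    A tighter deadline means more nodes working in parallel; this
    toy model simply divides the total work by the time available.
    """
    return max(1, math.ceil(node_hours_needed / req.deadline_hours))

# The user states the task; the service sizes the federation.
req = TaskRequest(name="reservoir-sim", accuracy=1e-4, deadline_hours=2.0)
nodes = estimate_nodes(req, node_hours_needed=96.0)
print(nodes)  # 48 nodes federated to meet a 2-hour deadline
```

The point of the sketch is the division of labor: the user expresses intent (accuracy, deadline), and everything after that is the service's problem.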
Parashar and researchers from IBM and the University of Texas at Austin publicly demonstrated the technology recently in an IEEE competition that the team won. The demo pulled together IBM supercomputers at sites in New York state and Saudi Arabia and added and dropped groups of processors as end users altered details of the task.
For example, during the demo, when users requested a faster time to completion of the task, more processing power was brought to bear automatically. Later, users increased the degree of accuracy required for the task, and even more processing power was pulled in.
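The elastic behavior seen in the demo, scaling up when the deadline tightens or the accuracy requirement rises, can be approximated by a simple control rule. This is a hypothetical sketch of the idea, not the demo's actual scheduler:

```python
import math

def rescale(work_remaining: float, hours_left: float,
            accuracy_factor: float = 1.0) -> int:
    """Return the node count needed to finish the remaining work on time.

    work_remaining: node-hours still to compute at baseline accuracy.
    accuracy_factor: >1 when the user raises the required accuracy,
    which multiplies the remaining work.
    """
    needed = math.ceil(work_remaining * accuracy_factor / hours_left)
    return max(1, needed)

# User tightens the deadline: more processors are brought to bear.
print(rescale(work_remaining=48.0, hours_left=1.5))  # 32
# User then doubles the accuracy requirement: even more are pulled in.
print(rescale(work_remaining=48.0, hours_left=1.5,
              accuracy_factor=2.0))  # 64
```

Each change of parameters simply re-derives the resource target, and the federation layer adds or drops groups of processors to match it.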
While the demonstration used only IBM supercomputers, any supercomputer could be added to the resources as long as it has an application programming interface compatible with the cloud-computing engine developed by Parashar.
That engine, called CometCloud, is software that enables on-the-fly federation of disparate supercomputers that can be physically located in public and private clouds, data centers and enterprise grids.
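The requirement that any supercomputer can join the federation as long as it exposes a compatible programming interface suggests an adapter pattern. A minimal illustrative sketch follows; the class and method names are assumptions for the example, not CometCloud's real interface:

```python
from abc import ABC, abstractmethod

class SiteAdapter(ABC):
    """Interface a site must implement to join the federation."""

    @abstractmethod
    def allocate(self, nodes: int) -> int:
        """Reserve up to `nodes`; return how many were actually granted."""

class FixedSite(SiteAdapter):
    """A site with a fixed pool of free nodes."""
    def __init__(self, capacity: int):
        self.capacity = capacity

    def allocate(self, nodes: int) -> int:
        grant = min(nodes, self.capacity)
        self.capacity -= grant
        return grant

class Federation:
    """Assembles resources from any sites that speak the adapter API."""
    def __init__(self, sites):
        self.sites = list(sites)

    def assemble(self, nodes_needed: int) -> int:
        granted = 0
        for site in self.sites:
            if granted >= nodes_needed:
                break
            granted += site.allocate(nodes_needed - granted)
        return granted

# Federate two geographically separate sites on the fly.
fed = Federation([FixedSite(20), FixedSite(40)])
print(fed.assemble(50))  # 50
```

Because the federation layer depends only on the adapter interface, a public cloud, a private data center, or an enterprise grid can all participate without the engine knowing their internals.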
CometCloud has been used to support science, engineering and business applications, but only as a research project. Parashar says he is not certain yet exactly how the service will be commercialized, but he expects that it will be available this fall.
Read more about data centers in Network World's Data Center section.