One of the most fascinating and highly technical conferences in the IT industry kicks off next week in New Orleans, when SC10 opens with seven days of big computers and big solutions to big problems.
The 23rd annual supercomputing conference will include the unveiling of the latest Top 500 list, with a system based in China expected to take over the title of world's fastest computer from the United States.
While last year's SC keynote featured Al Gore arguing that supercomputing can help reverse climate change, this year's show will feature Harvard Business School professor Clayton Christensen, who will discuss the challenges posed to the high-performance computing industry "as it seeks new paradigms to frame its emerging enabling technologies for continued performance growth."
But the real meat of the supercomputing show may be in the discussions and demonstrations on the exhibition floor, and in the technical sessions for attendees. Some 10,000 people attended last year's conference in Portland, Ore.
Microsoft, which recently claimed that Windows is less expensive than Linux for building high-performance computing systems, will be at the show to unveil news about its HPC efforts. (Linux is almost certain to dominate next week's Top 500 list, but don't tell Steve Ballmer.)
Caltech, meanwhile, will attempt to break its own world record in data transfer speeds; last year it set a record for transfers between the Northern and Southern hemispheres, "sustaining 8.26Gbps on each of two 10Gbps links between Sao Paulo and Miami."
SC10 will run from Saturday, Nov. 13, until the following Friday, with exhibition dates occurring Monday to Thursday. The technical program will offer many tutorials, panels, workshops and presentations of papers on cutting-edge technology.
For example, one paper from UC San Diego, titled "Understanding the Impact of Emerging Non-Volatile Memories on High-Performance, IO-Intensive Computing," will examine the impact of flash memory and the role of other non-volatile storage types in supercomputing.
"The painfully slow performance of non-volatile storage has been an unfortunate reality for system designers for several decades," the paper states. "Non-volatile, solid-state storage technologies promise to resolve these problems and enable high-performance systems that are faster, cheaper, and more agile than those we build today. Whether they can deliver on this promise remains to be seen, but it will certainly require that we understand the performance potential of these memories, their limitations, and how they will change the balance points within a system. It will also require evaluating the memories in the context of complete systems, since radically altering the cost of IO will reveal or create bottlenecks elsewhere."
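The "balance points" the authors describe can be illustrated with a toy micro-benchmark (a hypothetical sketch, not taken from the paper): the gap between buffered writes and writes forced to stable media via fsync is what fast non-volatile memories promise to shrink, shifting a system's bottleneck elsewhere.

```python
# Illustrative sketch only: compare buffered writes with writes forced
# to the storage device via fsync. On disk the gap is enormous; faster
# non-volatile memories narrow it, changing where the bottleneck sits.
import os
import tempfile
import time

def time_writes(path: str, n: int, size: int, durable: bool) -> float:
    """Return seconds taken to perform n writes of `size` bytes each."""
    payload = b"x" * size
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    start = time.perf_counter()
    try:
        for _ in range(n):
            os.write(fd, payload)
            if durable:
                os.fsync(fd)  # force the data out to the storage device
    finally:
        os.close(fd)
    return time.perf_counter() - start

with tempfile.TemporaryDirectory() as d:
    buffered = time_writes(os.path.join(d, "buf"), 100, 4096, durable=False)
    durable = time_writes(os.path.join(d, "dur"), 100, 4096, durable=True)
    print(f"buffered: {buffered:.4f}s  durable: {durable:.4f}s")
```

The absolute numbers depend entirely on the underlying storage, which is precisely the paper's point about evaluating memories in the context of complete systems.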
Another paper from USC, NASA and Caltech will, as its title suggests, examine "Data Sharing Options for Scientific Workflows on Amazon EC2."
Will cloud computing services such as Amazon EC2 aid the next generation of high-performance computing? That seems to be a distinct possibility, but as this paper notes there will be challenges in ensuring access to data.
"Running a workflow in the cloud involves creating an environment in which tasks have access to the input files they require," the paper states. "There are many existing storage systems that can be deployed in the cloud. These include various network and parallel file systems, object-based storage systems, and databases. One of the advantages of cloud computing and virtualization is that the user has control over what software is deployed, and how it is configured. However, this flexibility also imposes a burden on the user to determine what system software is appropriate for their application."
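The staging problem the authors describe, getting input files in front of each task regardless of which storage system holds them, can be sketched as a small pluggable layer (a hypothetical illustration, not code from the paper; `stage_inputs` and the copy-based `fetch` backend are invented for this example):

```python
# Illustrative sketch: a workflow task must see its inputs as local
# files, whatever storage system the user chose to deploy. A small
# staging layer with a pluggable fetch() backend captures that choice.
import os
import shutil
import tempfile

def stage_inputs(fetch, names, workdir):
    """Copy each named input into workdir using the chosen backend."""
    paths = []
    for name in names:
        dest = os.path.join(workdir, name)
        fetch(name, dest)  # backend decides how the bytes arrive
        paths.append(dest)
    return paths

with tempfile.TemporaryDirectory() as source, \
     tempfile.TemporaryDirectory() as work:
    # Create a fake input file in the "storage system".
    with open(os.path.join(source, "data.csv"), "w") as f:
        f.write("a,b\n1,2\n")

    # Backend for a mounted shared file system: a plain copy. An
    # object-store backend (e.g., S3) would download here instead.
    fetch = lambda name, dest: shutil.copy(os.path.join(source, name), dest)

    staged = stage_inputs(fetch, ["data.csv"], work)
    content = open(staged[0]).read()
    print(staged)
```

Swapping the `fetch` backend, rather than the workflow code, is what lets the user pick among the network file systems, object stores and databases the paper compares.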
Follow Jon Brodkin on Twitter: www.twitter.com/jbrodkin