IDG News Service - With chip makers continuing to increase the number of cores they include on each new generation of their processors, perhaps it's time to rethink the basic architecture of today's operating systems, suggested Dave Probert, a kernel architect within the Windows core operating systems division at Microsoft.
The current approach to harnessing the power of multicore processors is complicated and not entirely successful, he argued. The key may not be in throwing more energy into refining techniques such as parallel programming, but rather rethinking the basic abstractions that make up the operating systems model.
Today's computers don't get enough performance out of their multicore chips, Probert said. "Why should you ever, with all this parallel hardware, ever be waiting for your computer?" he asked.
Probert made his presentation on Wednesday at the University of Illinois at Urbana-Champaign's Universal Parallel Computing Research Center.
Probert is on the team working on the next generation of Windows, though he said the ideas in this talk did not represent any actual work his team is doing for Microsoft. In fact, he noted that many of the other architects on the Windows kernel development team don't even agree with his views.
For the talk, he set out to define what a new operating system, designed from scratch today, would look like. He concluded it would be quite different from Windows or Unix.
Today's typical desktop computer runs multiple programs at once, playing music while the user writes an e-mail and surfs the Web, for instance.
"Responsiveness really is king," he said. "This is what people want."
The problem in being responsive, he noted, is "how does the OS know [which task] is the important thing?" You don't want to wait for Microsoft Word to get started because the antivirus program chose that moment to start scanning all your files.
Most OSes have some priority scheduling to avoid these bottlenecks, but it is still crude. (Probert even suggested telemetry, in the form of a "This Sucks!" button on each computer, that a user could push whenever he or she gets frustrated with the computer's pokiness. The resulting data could be compiled to give OS developers a better idea of how to improve scheduling.)
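The kind of priority scheduling Probert describes can be illustrated with a minimal sketch. The task names and priority values below are hypothetical, chosen to mirror the Word-versus-antivirus example above; real OS schedulers are far more elaborate, but the core idea is simply "run the highest-priority ready task first":

```python
import heapq

def schedule(tasks):
    """Return the order tasks would run under simple priority scheduling.

    tasks maps a task name to a priority number; lower numbers run
    first, in the style of Unix nice values.
    """
    heap = [(priority, name) for name, priority in tasks.items()]
    heapq.heapify(heap)
    order = []
    while heap:
        _, name = heapq.heappop(heap)
        order.append(name)
    return order

# The interactive task the user is waiting on gets a high priority
# (low number); the background scan is deprioritized.
order = schedule({"antivirus_scan": 10, "word_startup": 0})
```

Here `word_startup` runs before `antivirus_scan`, which is exactly the behavior Probert wants; the difficulty he points out is that the OS usually has no reliable way to learn those priority numbers from the user.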
Chip makers took a "Field of Dreams" approach to adding multiple cores: build them and hope the application programmers would write programs for them. The problem is that today's desktop programs don't use the multiple cores efficiently enough, Probert said.
To get the full benefit from multiple cores, developers need to use parallel programming techniques. It remains a difficult discipline to master, and it hasn't been used much outside of specialized scientific programs such as climate simulators.
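As a minimal sketch of what such parallel decomposition looks like, the example below splits a workload across worker processes so that each can run on a separate core. The function and variable names are illustrative, not from Probert's talk; the point is that even this simple pattern requires the programmer to restructure the problem explicitly, which is part of why the technique has stayed niche:

```python
from multiprocessing import Pool

def square(n):
    # A stand-in for a CPU-bound unit of work
    return n * n

def parallel_squares(numbers, workers=4):
    # Fan the work out across worker processes, one per core ideally,
    # then gather the results back in order.
    with Pool(processes=workers) as pool:
        return pool.map(square, numbers)
```

Calling `parallel_squares([1, 2, 3, 4])` returns `[1, 4, 9, 16]`, computed across several processes. The burden of deciding how to split the work, and of keeping the pieces independent, falls entirely on the programmer, which is the difficulty Probert argues the OS itself should help address.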
Perhaps a better way to deal with multiple cores is to rethink the way operating systems handle these processors, Probert said. "Really, the question is not how do we do parallel, but what do we do with all these transistors?"