Computerworld - BARCELONA -- Software engineers at Intel are exploring new ways people can use the human voice, gestures and head-and-eye movements to operate computers.
Intel's Barry Solomon uses hand gestures in a demonstration of a perceptual computing toolkit being used by independent developers. (Photo by Matt Hamblen/Computerworld)
In the coming years, their research is expected to help independent developers build computer games, help doctors control computers used in surgery, and aid firefighters as they enter burning buildings.
"We don't really know what this work will become, but it's going to be fascinating to watch it play out," said Craig Hurst, Intel's director of visual computing product management, in an interview at Mobile World Congress. "So far, what we've seen has gone beyond what we thought of originally."
Intel's visual computing unit, created two years ago, has grown to become a top priority for the chip maker, Hurst said. Last fall, the unit released several software toolkits that are used by independent developers to create a raft of new and sometimes unusual applications.
One of the toolkits, called the Perceptual Computing SDK (software developer kit), was distributed to outside developers building applications that will be judged by Intel engineers. Intel plans to award $1 million in prizes to developers in 2013 for the most original application prototypes, not only in gaming but also in work productivity and other areas.
Barry Solomon, a member of the visual computing product group, demonstrated how the Intel software is being used by developers on Windows 7 and Windows 8 desktops and laptops. With a special depth-perception camera clipped to the top of his laptop lid and connected over USB to the computer, Solomon was able to show how the SDK software rendered his facial expressions and hand gestures on the computer screen, accompanied by an overlay of lines and dots to show the precise position of his eyes and fingers. A full mesh model can then be rendered.
With that tracking information easily available, a developer can quickly insert a person's face and hands into an augmented reality scenario. Or, the person can be quickly overlaid onto a green screen commonly seen in video applications to make a weather or news report. The person's gestures could be used by a developer to interact with functions in a game or productivity application.
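As a rough illustration of how a developer might consume that kind of tracking data, the sketch below maps fingertip positions to a simple "pinch" control. The `Point3D` structure, the function names, and the 3 cm threshold are all hypothetical assumptions for this example, not the Perceptual Computing SDK's actual API; a real application would read per-frame landmark positions from the depth camera rather than constructing them by hand.

```python
import math
from dataclasses import dataclass

# Hypothetical landmark structure: a perceptual computing pipeline would
# supply per-frame 3D positions like these for tracked eyes and fingertips.
@dataclass
class Point3D:
    x: float
    y: float
    z: float  # depth from the camera, in meters

def distance(a: Point3D, b: Point3D) -> float:
    """Euclidean distance between two tracked landmarks."""
    return math.sqrt((a.x - b.x) ** 2 + (a.y - b.y) ** 2 + (a.z - b.z) ** 2)

def detect_pinch(thumb_tip: Point3D, index_tip: Point3D,
                 threshold: float = 0.03) -> bool:
    """Treat thumb and index fingertips closer than ~3 cm as a 'pinch',
    which an application might map to a click or grab action."""
    return distance(thumb_tip, index_tip) < threshold

# Fingertips 1 cm apart: reads as a pinch.
print(detect_pinch(Point3D(0.0, 0.0, 0.5), Point3D(0.01, 0.0, 0.5)))   # True
# Fingertips 10 cm apart: open hand, no pinch.
print(detect_pinch(Point3D(0.0, 0.0, 0.5), Point3D(0.10, 0.0, 0.5)))   # False
```

The same pattern generalizes: once the SDK exposes landmark coordinates, gesture recognition reduces to geometric tests over those points, which is why the overlay of eye and finger positions in the demo is the key building block.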
A company called Touchcast is building a green-screen application that will be available later in 2013. The prototype depth camera Intel uses in its perceptual computing demonstrations with the SDK, called the Creative Interact Gesture camera, will also go on sale later this year.
Hurst said Intel's role in building the SDKs for developers is to "reduce the barriers" to making creative new applications. Voice software company Nuance worked with Intel on the speech recognition capabilities, while SoftKinetic provided depth recognition software for the camera and augmented reality software.
Originally published on www.computerworld.com.