How computers will recognize gestures and body language

Researchers are exploring more natural ways of communicating with computers.

Despite the increasingly common jokes – ask Apple's Siri, "What is the best phone?" and it may reply, "There are other phones?" – Speech Interpretation and Recognition Interface technology still isn't very human.

Siri, Google Now, and Microsoft's Cortana all speak and respond to questions, but they can't replicate the way humans interact with each other through facial expressions, tone of voice, and body language. PCs are even worse.


Scientists are trying to figure out how to change that and make digital interactions more human. One way they want to accomplish this is through gestures.

Computers should be "smart enough to reliably recognize non-verbal cues from humans in the most natural, intuitive way possible," reads an article about a new research project at Colorado State University.

Interfaces limited

Computer user interfaces are limited, providing "essentially one-way communication: users tell the computer what to do," Bruce Draper, a professor of computer science, said in the article.

The university recently received $2.1 million in funding from the Defense Advanced Research Projects Agency (DARPA), the arm of the U.S. Department of Defense that develops emerging technologies.


DARPA wants to delve more deeply into human-computer communication, and funds research through a program called "Communicating with Computers."

Good idea, but hard to do. Computers aren't human, and they currently aren't particularly intelligent. Also, while they're logical, they're not emotional.


Colorado State University's approach to the problem is to record users' gestures and other non-verbal responses as they interact with blocks, pictures, and other objects at a table. From those recordings, it will build a library.

To capture the movement, the project uses an add-on to Microsoft's Kinect gesture interpretation technology.


In the lab, the researchers converse with each user and capture the gestures that accompany reactions such as "stop" or "huh," according to the article.
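
To make the capture step concrete, here is a minimal sketch of how a "stop" gesture might be flagged in a stream of Kinect-style skeleton frames. The article doesn't describe the project's actual pipeline, so the joint names, thresholds, and frame source below are illustrative assumptions, not any real Kinect API.

```python
# Hypothetical sketch: flagging a "stop" gesture in Kinect-style
# skeleton frames. Joint names and thresholds are assumptions for
# illustration; this is not a real Kinect API.
from dataclasses import dataclass


@dataclass
class Joint:
    x: float  # meters, left/right relative to the sensor
    y: float  # meters, up/down
    z: float  # meters, distance from the sensor


def looks_like_stop(joints: dict) -> bool:
    """Crude heuristic: hand raised above the elbow and pushed
    toward the sensor, like a traffic cop signaling "stop"."""
    hand = joints["right_hand"]
    elbow = joints["right_elbow"]
    shoulder = joints["right_shoulder"]
    raised = hand.y > elbow.y
    pushed_forward = hand.z < shoulder.z - 0.15  # 15 cm closer than shoulder
    return raised and pushed_forward


def label_frames(frames):
    """Label each skeleton frame; a real system would smooth over a
    window of frames rather than classify each one independently."""
    for joints in frames:
        yield "stop" if looks_like_stop(joints) else None
```

A per-frame heuristic like this would be noisy in practice, which is presumably why the project records many users and builds a library rather than hand-coding rules.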

The scientists then plan to interpret how each reaction functions within a conversation – when it's used and when it isn't – and build the library from those interpretations. They call the library's entries Elementary Composable Ideas (ECIs).

"Like little packets of information recognizable to computers, each ECI contains information about a gesture or facial expression derived from human users, as well as a syntactical element that constrains how the information can be read," the article explains.


Gesture recognition will be a big area in the future. I've written about how phone maker ZTE is concentrating on gestures alongside voice, in "Gesture, voice-control are the future of mobile tech, smartphone manufacturer says."

ZTE says consumers are expecting the phone to adapt to them, not the other way around. Consumer expectations are "spiraling," a ZTE executive said in an article published at Mobile Industry Review.

One of ZTE's objectives is removing the endless stream of steps that users have to take to accomplish a task on a smartphone.

More words

In the case of the Colorado State University program, the idea is not to replace words, but to develop intuitive ways of communicating with computers that work alongside them, the article explains.

My PC certainly doesn't do anything like that. I've just yawned and rubbed my eyes, and the cursor's still blinking expectantly, like a dog ready for a walk. Moving the mouse to the File Close icon is my best option, I think.
