How much will you trust your robot?

Credit: Reuters/Yuriko Nakao

Trust and other Human-Robot Interaction issues need to be addressed sooner rather than later, researcher says


Robots will be managed and run by humans, at least to begin with, according to an automation expert.

And if you're the one controlling them, that raises questions: How will you get along with these contraptions? And how do you keep the machine from misunderstanding you? So asks Thomas B. Sheridan, a professor at the Massachusetts Institute of Technology who studies humans and automation.

Researchers need to become more active in addressing these kinds of questions rather than skimming over potential challenges, says Sheridan. He has been reviewing the scientific literature on the subject and says his peers aren't doing enough research.


“The time is ripe for human factors researchers to contribute scientific insights that can tackle the many challenges of human-robot interaction,” says Sheridan in a paper published in Human Factors: The Journal of the Human Factors and Ergonomics Society.

Human factors is the discipline that studies interactions between people and systems.

Think about what could go wrong

All manner of things could go wrong, and we need to think about that. For example, humans aren’t good at staying alert enough “to take over control of a Google car quickly enough should the automation fail,” Sheridan says in a news release on the society’s website.

Another question Sheridan poses: how do you know how the robot will react to a signal from its human operator? Any human knows that a raised hand during a critical event, such as moving a heavy object, combined with a verbal "stop!" command, means to abort a maneuver.

Can we be sure a robot will understand a nuance in the heat of the moment? Or even a direct command? The answer, Sheridan says, is to “use real-time virtual reality simulation.” That lets the operator “observe what the spoken commands will cause the robot to do, before giving the ‘go’ signal to the robot,” he says.
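To make that concrete, here is a minimal sketch of a preview-then-confirm loop in the spirit Sheridan describes, with a toy class standing in for a real-time virtual-reality model. All the names here (SimulatedRobot, preview, confirm_and_dispatch) are hypothetical, invented for illustration rather than taken from Sheridan's paper:

    # A hypothetical sketch of "simulate before you send the 'go' signal".
    # The toy SimulatedRobot stands in for a real-time VR model of the robot.

    class SimulatedRobot:
        """A stand-in for a virtual-reality simulation of the robot."""
        def __init__(self):
            self.position = 0.0

        def preview(self, command: str) -> str:
            """Describe what the spoken command would cause the robot to do."""
            if command == "advance":
                return f"robot would move from {self.position} to {self.position + 1.0}"
            if command == "stop":
                return "robot would hold its current position"
            return "command not understood; robot would do nothing"

    def confirm_and_dispatch(command: str, sim: SimulatedRobot) -> bool:
        """Show the operator the simulated outcome before the real 'go'."""
        print(f"Preview: {sim.preview(command)}")
        answer = input("Send 'go' to the real robot? [y/N] ")
        return answer.strip().lower() == "y"

    if __name__ == "__main__":
        sim = SimulatedRobot()
        if confirm_and_dispatch("advance", sim):
            print("Command dispatched to robot.")
        else:
            print("Command aborted by operator.")

The point of the pattern is that the operator sees the predicted behavior, not just the parsed command, before anything physical happens.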

Sheridan's paper also looks at other kinds of problems. Avoiding cross-purposes, where human and robot work against each other, is another Human-Robot Interaction (HRI) challenge, says Sheridan. But it can be solved: elicit a mental model from the human operator of just what is expected of the robot, then reconcile it with the robot's own model, built using artificial intelligence (AI), so that conflicts are caught before they arise. A rough sketch of that reconciliation step follows below.
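As an illustration of the idea, this sketch reduces both "models" to simple dictionaries of expected outcomes so a mismatch can be flagged before the robot acts. The data and function names are invented for this example, not drawn from the paper:

    # A hypothetical conflict check between the operator's expectations and
    # the robot's plan, each reduced here to a dictionary of expected outcomes.

    def detect_conflict(human_expectation: dict, robot_plan: dict) -> list:
        """Return the keys on which the operator and the robot disagree."""
        return [key for key in human_expectation
                if robot_plan.get(key) != human_expectation[key]]

    human_expectation = {"target": "shelf_A", "speed": "slow"}
    robot_plan = {"target": "shelf_B", "speed": "slow"}

    conflicts = detect_conflict(human_expectation, robot_plan)
    if conflicts:
        print(f"Cross-purposes detected on: {conflicts}; pausing for review.")
    else:
        print("Human and robot models agree; proceeding.")

In a real system both models would be far richer, but the design choice is the same: compare intentions explicitly rather than discovering the disagreement mid-maneuver.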

Overall, "teaching the robot, and the avoidance of unintended consequences" is a challenge Sheridan says needs to be addressed. Outside of aviation, the human factors discipline hasn't contributed much science, says Sheridan, who reviewed a large body of writings and papers as part of his work.

And “trust in robots is a critical component that requires study,” he says.

In 2015, I wrote about an academic who revisited the idea that robots could become so powerful through AI that they pose a threat to humanity.

“Research in the areas relating to lifestyle, fears and human values is probably the most important challenge for HRI,” Sheridan concludes.

This article is published as part of the IDG Contributor Network.
