People lie to some kinds of robots


People like robots that mimic human emotions and have expressive faces, but they worry about hurting the robots' feelings, so they lie to them

Humans prefer robots that communicate well and express themselves over bots that simply complete tasks efficiently, or even successfully, researchers say.

Academics have been trying to find out just how far that preference goes. One problem their experiments have revealed: in one case, human operators wanted to lie to a test robot to avoid hurting its programmed feelings.


In that case, a humanoid-style robot that attempted to make an egg dish, failed, and then apologized to the human with an expressive, sad face was liked better in experiments than less klutzy robots that made perfect omelettes.

“Making an assistive robot partner expressive and communicative is likely to make it more satisfying to work with and lead to users trusting it more,” University College London (UCL) says in a press release about the study. And that’s “even if it makes mistakes.”

The team, made up of UCL and University of Bristol academics, had the expressive but somewhat useless egg-smashing robot later ask the study participants whether they’d offer it a job as a kitchen assistant.

The humans could only respond "yes" or "no" and weren’t allowed to proffer any explanation of their decision.

The researchers found that the humans were generally reluctant to answer and took the question to heart. One person thought the robot looked “sad” after being turned down for the job, even though the robot hadn’t been programmed to change its facial expression during that exchange.

“Another complained of emotional blackmail, and a third went as far as to lie to the robot,” the release says.

That’s a problem. You shouldn’t have to lie to your robot.

Trust will be an important element of human-robot interaction, and one way to create it is for the robot to apologize for an error, just as humans do with each other to build the same confidence. Apologizing also “softens the displeasure,” the study says (PDF).
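The study doesn’t publish its control software, but the apologize-on-error pattern it tested is simple to sketch. Below is a minimal, hypothetical Python illustration; the ExpressiveRobot class, its methods, and the random failure rate are all invented for this example and aren’t drawn from the researchers’ actual system.

```python
# Hypothetical sketch of an apologize-on-error interaction loop.
# Nothing here comes from the UCL/Bristol study's software; the
# class, methods, and failure rate are invented for illustration.

import random


class ExpressiveRobot:
    """A toy robot that communicates about its failures instead of hiding them."""

    def set_expression(self, mood: str) -> None:
        # A real robot would drive an animated face here.
        print(f"[face: {mood}]")

    def say(self, message: str) -> None:
        print(f"Robot: {message}")

    def attempt_task(self, task: str) -> bool:
        # Stand-in for real actuation; fails randomly, like the klutzy egg robot.
        print(f"(attempting: {task})")
        return random.random() > 0.3


def run_task_with_apology(robot: ExpressiveRobot, task: str) -> None:
    """On failure, show a sad face and apologize, the behavior the study
    found made the robot more likable despite its mistakes."""
    if robot.attempt_task(task):
        robot.set_expression("happy")
        robot.say(f"All done with the {task}!")
    else:
        robot.set_expression("sad")
        robot.say(f"I'm sorry, I made a mistake with the {task}. May I try again?")


if __name__ == "__main__":
    run_task_with_apology(ExpressiveRobot(), "omelette")
```

The point of the pattern is that the failure path communicates rather than staying silent: the error itself doesn’t change, only how it is acknowledged.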

Amusingly, the expressive robot took 50 percent longer to make the omelette than a non-expressive one. The delay, ironically, was caused by all that communicating.

More work needed in human-robot interaction

Human-robot interaction needs more work, many academics say.

Misunderstandings of nuance and working at cross-purposes are a potential issue, says Thomas B. Sheridan, a professor at the Massachusetts Institute of Technology who studies humans and automation. Speaking independently of the egg study, he says a slew of problems could arise as we become dependent on robots. “Obtaining mental models from human operators as to just what is expected from the robot” is one approach he has suggested.

Other researchers say robots’ “judgmental mistakes, wrong assumptions, expressions of tiredness or boredom, or getting overexcited” will help humans “understand, relate to and interact” with robots better. I wrote previously about a University of Lincoln study that suggested the structure and perfection of robots was unsettling.

Robots can’t be too perfect, the studies say. And indeed, if flawed is how they end up, it might level the playing field a bit when they come for our jobs, too.

This article is published as part of the IDG Contributor Network.
