Robots will have to be flawed if they are to create successful working relationships with humans, new research has found.
"Judgmental mistakes, wrong assumptions, expressing tiredness or boredom, or getting overexcited," will help humans "understand, relate to and interact" with robots more easily, Mriganka Biswas of the University of Lincoln in Britain says in an article on the university's website.
Biswas has been conducting a study for a PhD on how humans interact with robots.
Robots are increasingly being used to support caregivers, the article says.
A person is much more likely to "warm" to a robot if it displays human-like "cognitive biases," the article says. In other words, it can't be too perfect.
When a robot is programmed to be too perfect, it can be "off-putting," Sophie Curtis writes in a Telegraph article about the study.
The problem is that robots are indeed perfect. Since the early days of science fiction, robots have been cast as "superior" and "distant," but companion robots, such as those now used in healthcare, need to be "friendly, have the ability to recognize users' emotions and needs, and act accordingly," the researchers think.
They can't just follow structured rules and behavior.
"How can we interact with something that is more perfect than we are?" the scientists ask. Empathy is important.
Biswas says imperfections in the robots can help establish empathy with humans.
In fact, human-like faults, such as making mistakes when remembering simple facts or expressing human emotions like extreme happiness and sadness, made the robots more likeable in interactions that the researchers had staged for their study.
When the researchers staged interactions between a sampling of participants and robots and asked the participants to rate their experiences, almost all taking part "enjoyed a more meaningful interaction with the robots when they made mistakes."
"The cognitive biases we introduced led to a more humanlike interaction process," the scientists say.
Now, of course, one big question that's been hovering over us humans is just when robots will take over our jobs.
Earlier this year, in "Robots could wipe out the human race," I wrote about how robots might become faster and smarter than humans, according to an Oxford University professor.
In that hypothesis, Dr. Stuart Armstrong says things aren't going to turn out well for humans. He reckons that future robots' Artificial General Intelligence will interpret commands excessively literally. That's a problem.
He uses the example that robots might interpret a command like "prevent human suffering" as "kill all humans."
Power and speed
Add the power and speed a robot will be able to harness compared with a human, plus the possibility that robots start making decisions among themselves, and humans may be in trouble, he thinks.
Clearly, though, if this research is correct and we build enough imperfect robots that make the same kind of common mistakes as humans, we don't have to worry about that anymore—it'll be a level playing field in the employment stakes.
All we have to do is make sure they work as designed. Easy.
This article is published as part of the IDG Contributor Network.