US Navy wants smart robots with morals, ethics

What do robots do in the face of various dilemmas? How do they make decisions, and explain those decisions?

The US Office of Naval Research this week offered a $7.5 million grant to university researchers to develop robots with autonomous moral reasoning ability.

While the idea of robots making their own ethical decisions smacks of SkyNet - the science-fiction artificial intelligence system featured prominently in the Terminator films - the Navy says it envisions such systems having extensive use in first-response and search-and-rescue missions, as well as in medical applications.

+More on Network World: Quick look: Google's self-driving car+

The ONR-funded project will first isolate essential elements of human moral competence through theoretical and empirical research and develop formal frameworks for modeling human-level moral logic. Next, it will implement corresponding mechanisms for moral competence in a computational architecture. Once the architecture is established, researchers can begin to evaluate how well machines perform in human-robot interaction experiments in which robots face various dilemmas, make decisions and explain those decisions in ways that are acceptable to humans, according to Selmer Bringsjord, professor and head of the Department of Cognitive Science at Rensselaer, who will share the grant with researchers from Brown, Yale and Georgetown.

The US Department of Defense forbids the use of lethal, fully autonomous robots. Semi-autonomous robots, meanwhile, are not permitted to choose and engage particular targets or specific target groups that have not been selected by an authorized human operator.

According to ONR cognitive science program director Paul Bello, even though today's unmanned systems are 'dumb' in comparison to a human counterpart, progress is being made quickly to incorporate more automation into them. "Even if such systems aren't armed, they may still be forced to make moral decisions," Bello said. In an interview with DefenseOne.com, he also noted that in a catastrophic scenario, a machine might have to decide whom to evacuate or treat first.

In a press release, Bringsjord said that since the scientific community has yet to mathematize and mechanize what constitutes correct moral reasoning and decision-making, the challenge for his team is severe.

In Bringsjord's approach, all robot decisions would automatically go through at least a preliminary, lightning-quick ethical check using simple logics inspired by today's most advanced artificially intelligent question-answering computers. If that check reveals a need for deep, deliberate moral reasoning, such reasoning is fired inside the robot, using newly invented logics tailor-made for the task. "We're talking about robots designed to be autonomous; hence the main purpose of building them in the first place is that you don't have to tell them what to do," Bringsjord said.

"When an unforeseen situation arises, a capacity for deeper, on-board reasoning must be in place, because no finite ruleset created ahead of time by humans can anticipate every possible scenario in the world of war."

Consider, for example, a robot medic generally responsible for helping wounded American soldiers on the battlefield. On a special assignment, the robo-medic is ordered to transport urgently needed medication to a nearby field hospital. En route, it encounters a Marine with a fractured femur. Should it delay the mission in order to assist the Marine?

If the machine stops, a new set of questions arises: The robot assesses the soldier's physical state and determines that unless it applies traction, internal bleeding in the soldier's thigh could prove fatal. However, applying traction will cause intense pain. Is the robot morally permitted to cause the soldier extreme pain?
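Writing the dilemma out as a naive rule check makes it clear why Bringsjord argues that no finite ruleset written ahead of time can settle such cases. The rules and the function below are hypothetical illustrations, not the project's formalism.

```python
# Illustrative sketch only: encoding the medic dilemma as fixed rules shows
# how quickly those rules come into conflict. Everything here is hypothetical.

def evaluate_stop_to_help(bleeding_is_fatal: bool,
                          traction_causes_extreme_pain: bool,
                          mission_is_urgent: bool) -> str:
    reasons_for = []
    reasons_against = []

    if bleeding_is_fatal:
        reasons_for.append("without traction, internal bleeding could be fatal")
    if traction_causes_extreme_pain:
        reasons_against.append("applying traction causes extreme pain")
    if mission_is_urgent:
        reasons_against.append("delaying the medication delivery risks other lives")

    # A naive tally cannot say whether preventing one death outweighs causing
    # pain and delaying the mission; making that weighing explicit and
    # explainable is what the project's moral logics are meant to do.
    if reasons_for and not reasons_against:
        return "stop and treat: " + "; ".join(reasons_for)
    if reasons_against and not reasons_for:
        return "continue the mission: " + "; ".join(reasons_against)
    return "dilemma: " + "; ".join(reasons_for + reasons_against)


print(evaluate_stop_to_help(True, True, True))
```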

Bringsjord and others are preparing to present some of their initial findings at an Institute of Electrical and Electronics Engineers (IEEE) conference in Chicago in May, where they will demonstrate two autonomous robots: one that succumbs to the temptation to take revenge, and another, controlled by the moral logic they are engineering, that resists its vengeful "heart" and does no violence.
