
US Navy wants smart robots with morals, ethics

The US Office of Naval Research this week offered a $7.5m grant to university researchers to develop robots with autonomous moral reasoning ability.

While the idea of robots making their own ethical decisions smacks of Skynet – the science-fiction artificial intelligence system featured prominently in the Terminator films – the Navy says it envisions such systems seeing extensive use in first-response and search-and-rescue missions, as well as in medical applications.


The ONR-funded project will first isolate essential elements of human moral competence through theoretical and empirical research and develop formal frameworks for modeling human-level moral reasoning. Next, it will implement corresponding mechanisms for moral competence in a computational architecture. Once that architecture is established, researchers can evaluate how well machines perform in human-robot interaction experiments in which robots face various dilemmas, make decisions, and explain those decisions in ways that are acceptable to humans, according to Selmer Bringsjord, professor and head of the Department of Cognitive Science at Rensselaer, who will share the grant with researchers from Brown, Yale, and Georgetown.

The US Department of Defense forbids the use of lethal, fully autonomous robots, and semi-autonomous robots may not select and engage particular targets or target groups that have not first been selected by an authorized human operator.

According to ONR cognitive science program director Paul Bello, even though today's unmanned systems are "dumb" compared with a human counterpart, progress toward greater automation is being made at a rapid pace. "Even if such systems aren't armed, they may still be forced to make moral decisions," Bello said, noting in an interview that in a catastrophic scenario, a machine might have to decide whom to evacuate or treat first.

In a press release, Bringsjord said that since the scientific community has yet to mathematize and mechanize what constitutes correct moral reasoning and decision-making, the challenge for his team is severe.

In Bringsjord’s approach, all robot decisions would automatically go through at least a preliminary, lightning-quick ethical check using simple logics inspired by today’s most advanced artificially intelligent and question-answering computers. If that check reveals a need for deep, deliberate moral reasoning, such reasoning is fired inside the robot, using newly invented logics tailor-made for the task. “We’re talking about robots designed to be autonomous; hence the main purpose of building them in the first place is that you don’t have to tell them what to do,” Bringsjord said.

“When an unforeseen situation arises, a capacity for deeper, on-board reasoning must be in place, because no finite ruleset created ahead of time by humans can anticipate every possible scenario in the world of war.”
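The two-tier design described above – a lightning-quick preliminary check on every decision, escalating to deeper deliberative reasoning only when needed – can be sketched in miniature. This is an illustrative assumption, not Bringsjord's actual system; the rule names and scoring are invented:

```python
# Minimal sketch of a two-tier ethical check (hypothetical, for illustration):
# every candidate action passes a fast rule-based screen; only actions that
# trip a rule are escalated to slower, deliberative moral reasoning.

def fast_check(action):
    """Preliminary, lightning-quick screen: flag any action that is
    expected to harm a human (a stand-in for the 'simple logics')."""
    return action.get("expected_harm", 0) > 0

def deliberate(action):
    """Deep-reasoning path (here a crude stub): permit harm only if it
    prevents a strictly greater harm. Real systems would use far richer,
    tailor-made moral logics."""
    return action.get("harm_prevented", 0) > action.get("expected_harm", 0)

def decide(action):
    if not fast_check(action):
        return True            # fast path: no ethical concern detected
    return deliberate(action)  # escalate to deliberative reasoning

# A harmless action is approved on the fast path; a harmful one is
# approved only if the deliberative check finds it prevents worse harm.
print(decide({"expected_harm": 0}))                        # True
print(decide({"expected_harm": 3, "harm_prevented": 10}))  # True
print(decide({"expected_harm": 5, "harm_prevented": 1}))   # False
```

The key design point the article describes is that the expensive reasoning fires only on demand, so routine decisions stay fast.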

For example, consider a robot medic generally responsible for helping wounded American soldiers on the battlefield. On a special assignment, the robo-medic is ordered to transport urgently needed medication to a nearby field hospital. En route, it encounters a Marine with a fractured femur. Should it delay the mission in order to assist the soldier?

If the machine stops, a new set of questions arises: The robot assesses the soldier’s physical state and determines that unless it applies traction, internal bleeding in the soldier’s thigh could prove fatal. However, applying traction will cause intense pain. Is the robot morally permitted to cause the soldier extreme pain?
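To make the dilemma concrete, the traction decision might be encoded as the kind of input a deliberative reasoner must weigh. The field names and severity numbers below are invented for illustration and imply nothing about how the actual project scores harms:

```python
# Hypothetical encoding of the robo-medic dilemma (illustrative only):
# applying traction inflicts pain but averts fatal internal bleeding.

apply_traction = {
    "causes": "extreme pain",   # harm inflicted on the Marine
    "expected_harm": 3,         # toy severity score for the pain caused
    "harm_prevented": 10,       # toy score: fatal bleeding averted
    "delays_mission": True,     # the medication delivery is postponed
}

# Under a simple "permit harm only if it prevents greater harm" rule,
# traction is permitted despite the pain it causes.
permitted = apply_traction["harm_prevented"] > apply_traction["expected_harm"]
print(permitted)  # True
```

Whether such a numeric trade-off is an acceptable model of moral permission is, of course, exactly the kind of question the grant is meant to investigate.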

Bringsjord and others are preparing to demonstrate some of their initial findings at an Institute of Electrical and Electronics Engineers (IEEE) conference in Chicago in May. There they will demonstrate two autonomous robots: one that succumbs to the temptation to take revenge, and another – controlled by the moral logic they are engineering – that resists its vengeful "heart" and does no violence.
