New role for robot warriors

Drones are just part of a bid to automate combat. Can virtual ethics make machines decision makers?

Tony Avelar/The Christian Science Monitor/File
Airmen roll out a Predator unmanned aircraft in Indian Springs, Nev. Such aircraft are tightly controlled by remote human operators. Some artificial-intelligence proponents believe next-generation robots could function more autonomously.
Jonathan Nackstrand/AFP/Newscom
A new unmanned aircraft system on display Oct. 12, 2009, at the Israel Defense exhibition at the Ganei Ha-tarucha Fair Center in Tel Aviv.

Science fiction sometimes depicts robot soldiers as killing machines without conscience or remorse. But at least one robotics expert today says that someday machines may make the best and most humane decisions on the battlefield.

Guided by virtual emotions, robots could not only make better decisions about their own actions but also act as ethical advisers to human soldiers, or even as observers who report back on whether soldiers' battlefield conduct complied with international law.

As militaries around the world invest billions in robotic weapons, no fundamental barriers lie ahead to building machines that "can outperform human soldiers in the battlefield from an ethical perspective," says Ronald Arkin, associate dean at the School of Interactive Computing at Georgia Institute of Technology in Atlanta. The result would be a reduction in casualties both for soldiers and civilians, he says.

Dr. Arkin has begun work on an ethical system for robots based on the concept of "guilt." As a robot makes decisions, such as whether to fire its weapons and what type of weapon to use, it would constantly assess the results and learn. If the robot established that its weapons caused unnecessary damage or casualties, it would scale back its use of weapons in a future encounter. If the robot repeatedly used excessive force, it would shut down its weapons altogether – though it could continue to perform its other duties such as reconnaissance.

"That's what guilt does in people, too, at least in principle," Arkin says. "Guilty people change their behavior in response to their actions."

Though "Terminator"-style warriors will likely remain fictional long into the future, thousands of military robots are already operating on land, sea, and in the air, many of them capable of firing lethal weapons. They include missile-firing Predator and Reaper aircraft used by the American military in Iraq and Afghanistan, remotely controlled by human soldiers. Naval ships from several nations employ Phalanx gun systems (sometimes called "R2D2s" on American ships, referring to the robot from "Star Wars"), capable of shooting down incoming planes or missiles without command or targeting from a human.

South Korea has deployed armed robotic systems along its demilitarized zone with North Korea. The Israeli army patrols its borders with Gaza and Lebanon with roving unmanned ground vehicles.

Among systems being developed for the future by the US military are the Vulture, a pilotless helicopter that could stay aloft for up to 20 hours, and an unmanned ground combat vehicle.

But Arkin's sunny forecast for the future of ethical robot warriors has met with deep skepticism among some in his field.

New laws will be needed to constrain the use of autonomous robot weapons (those that operate without a human operator) on the battlefield, argued Noel Sharkey, a professor of artificial intelligence and robotics at Britain's University of Sheffield, in an essay last year. He sees two huge ethical hurdles: One is the unproven ability of robotic weapons to discriminate between friend and foe, and so to avoid hitting civilians and other noncombatants. The other is whether a computer can judge "proportionate" force, using enough to gain military advantage while minimizing civilian casualties.

Both these concepts are seen as essential to waging ethical warfare.

"Humans understand one another in a way that machines cannot," Dr. Sharkey writes. "Cues can be very subtle and there are an infinite number of circumstances where lethal force is inappropriate." International talks on the use of autonomous robots in warfare "are needed urgently," he says.

What experts do agree on is that today's robots fall far short of the kind of artificial intelligence (AI) they need to operate effectively on their own, let alone act ethically. "When it comes to high-stakes decisions that hinge on understanding subtle human preferences and intentions, we still have a bunch of research to do at the level of basic science to give AI systems the same common sense that people have," says Eric Horvitz, principal researcher at Microsoft Research and immediate past president of the Association for the Advancement of Artificial Intelligence. The AAAI plans to issue a report later this year on the challenges and opportunities presented by the growing relationship between humans and AI.

Arkin's work is of great interest to Patrick Lin, director of the Ethics and Emerging Sciences Group at California Polytechnic State University in San Luis Obispo. His group is working on software for military robots that would include ethical behavior.

"We're sending machines to do our dirty work," says Dr. Lin, who is also an ethics fellow at the US Naval Academy and is working on a book on robotic ethics. The military should adopt a slow "crawl, walk, run philosophy" to introduce autonomous robots to the battlefield, he says.

Many kinds of limits could be put on military robots, Lin says. For example, their use could be confined to a "kill box," an area known to contain only enemy troops. They could be authorized to fire only on opposing robots or other nonhuman targets. Or they might carry only nonlethal weapons, such as water cannons or stun devices.
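As a rough illustration of how such limits might be encoded, the following hypothetical Python check combines a kill box with a restriction to nonhuman targets. The function name, coordinates, and rule set are invented for the example, not drawn from any real targeting system.

    def engagement_allowed(target_kind, target_pos, kill_box,
                           allowed_kinds=("robot", "vehicle")):
        # Permit firing only inside the designated kill box and only at nonhuman targets.
        (x_min, y_min), (x_max, y_max) = kill_box
        x, y = target_pos
        inside_box = x_min <= x <= x_max and y_min <= y <= y_max
        return inside_box and target_kind in allowed_kinds

    # A robotic target inside the box is cleared; a person, or anything outside
    # the box, is not.
    print(engagement_allowed("robot", (3, 4), ((0, 0), (10, 10))))   # True
    print(engagement_allowed("person", (3, 4), ((0, 0), (10, 10))))  # False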

Right now, Arkin argues, the discussion about the use of robots in war is in its infancy. It reminds him of the attitudes people held before the Wright brothers flew the first airplane, when some wondered whether humans were meant to fly at all, or whether flying could ever be made safe.

Robots don't need to have a perfect ethical record to win a place on the battlefield, Arkin argues. They only have to behave better than humans, who have a deeply flawed ethical record in warfare.

"This is not about robot armies out to kill us," Arkin says. "This is actually about reducing noncombatant fatalities."
