Pentagon is worrying about 'Terminator' coming true. Seriously.

Weapons that can be programmed to autonomously kill without human input are 'a decade or so away,' a top Pentagon official says. What to do about that is a question causing deep disagreement. 

Jordan Strauss/Invision/AP/File
This isn't quite what the Pentagon is picturing, but the debate over the development and deployment of autonomous weapons systems is gaining urgency.

The idea behind the Terminator films – specifically, that a Skynet-style military network becomes self-aware, sees humans as the enemy, and attacks – isn’t too far-fetched, one of the nation’s top military officers said this week.

Nor is that kind of autonomy the stuff of the distant future. “We’re a decade or so away from that capability,” said Gen. Paul Selva, vice chairman of the Joint Chiefs of Staff.

With such a sci-fi prospect looming, top military thinkers and ethicists are beginning to consider the practical consequences. But the more they do, the more it’s clear that there is considerable disagreement about just how much freedom to give machines to make their own decisions.

“We have to be very careful that we don’t design [autonomous] systems in a way that we can create a situation where those systems actually absolve humans of the decision” about whether or not to use force, General Selva said. “We could get dangerously close to that line, and we owe it to ourselves and to the people we serve to keep that a very bright line.”

At the same time, “The notion of a completely robotic system that can make a decision about whether or not to inflict harm on an adversary is here,” he added in remarks at the Center for Strategic and International Studies Monday. “It’s not terribly refined, not terribly good. But it’s here.”

This leaves top Pentagon officials confronting what they call the “Terminator conundrum” – and wrestling with how to handle it.

The 'Russia' question 

The argument begins with an assertion: Adversaries such as Russia and China are going to build these fast-moving, fully autonomous killing systems, so perhaps the Pentagon should design them, too – not to use them, top officials are quick to add, but to know how they work and how to counter them.

After all, they say, policymakers may need options, and it’s the job of the Pentagon to give them these options.

To this end, in a highly anticipated report released in August, the Pentagon’s Defense Science Board urged the military to “accelerate its exploitation of autonomy” in order to “remain ahead of adversaries who also will exploit its operational benefits.”

This leaves opponents of autonomous weapons systems wary. Many nongovernmental organizations have called for bans on developing killing machines that leave humans out of the loop, insisting instead that weapons remain under “meaningful human control.”

But what is “meaningful human control”?

That concept hasn’t been well-defined, says Paul Scharre, director of the 20YY Future of Warfare Initiative at the Center for a New American Security.

What is clear is that humans can’t be involved solely in a “push button way,” Mr. Scharre says. Rather, the humans supervising these systems must be “cognitively engaged.”

Scharre points to two fratricides in 2003, when malfunctioning Patriot missile systems shot down a US Navy F/A-18 and a British Tornado over Iraq.

“One of the problems with the fratricides was that people weren’t exercising judgment. They were trusting in an automated system, and people weren’t monitoring it.” 

Humans are slow 

But the Pentagon’s debates get murkier from there. If an adversary were to develop an effective fully automated system, it would likely react much faster than a US system that requires human checks and balances. In that scenario, the human checks could cost lives.

Frank Kendall, the undersecretary of Defense for Acquisition, Technology, and Logistics – essentially, the Pentagon’s top weapons buyer – has signaled that he differs from Selva. If people always have oversight of autonomous weapons, he says, that could put the US at a disadvantage.

“Even in a more conventional conflict, we’re quite careful about not killing innocent civilians,” Mr. Kendall noted at the Army Innovation Summit last month. “I don’t expect our adversaries to all behave that way, and the advantage you have if you don’t worry about that as much is you make decisions more quickly.”

After all, many weapons systems could be easily and effectively automated, he argued – including tanks that could sense incoming rounds and take out their source.

“It would take nothing to automate firing back, nothing,” Kendall told the audience, according to the online publication Breaking Defense. “Others are going to do it. They are not going to be as constrained as we are, and we’re going to have a fundamental disadvantage if we don’t.”

But global bans on autonomous weapons are also problematic, Selva said. “It’s likely there will be violators.”

“In spite of the fact that we don’t approve of chemical or biological weapons, we know there are entities, both state and nonstate, that continue to pursue that capability,” he said.

Moreover, he noted that these questions put him – as a military man – in a difficult position. “My job as a military leader is to visit unspeakable violence on an enemy.... Our job is to defeat the enemy.”

The astonishing 'Go' experiment 

Removing humans from the decision to inflict violence could make these systems highly unpredictable, in good ways and bad. That was brought out with dramatic effect at an event earlier this year pitting Google DeepMind’s AlphaGo program against a top-level player of the complex board game Go.

The machine, which learned the game through training rather than explicit programming, made a move that astonished Go commentators. “The move that the computer made was so unexpected and counterintuitive that it blew commentators away,” notes Scharre. “At first they thought it was a fake, and then it set in, the brilliance of the move.”

The point, Scharre says, is that “no matter how much testing is done, we’ll always see surprises when machines are placed into real-world environments.”

It’s particularly true in competitive environments – epitomized by war – where adversaries will try to hack, trick, or manipulate the system, he adds.

“Sometimes the surprises are good,” Scharre says – such as the Go move, which experts calculated to be a “brilliant, beautiful” play that only 1 in 10,000 humans would have made.

In other cases, the surprises are unwelcome – as with the unexpected and tragic flaws that led to the Patriot air defense fratricides.

“Better testing and evaluation is good, but it can only take us so far. At a certain point, we will either have to decide to keep a human in the loop, even if only as a fail-safe,” he adds, “or we will have to accept the risks that come with deploying these systems.”

Weighing the benefits  

“In some cases, the benefits of autonomy may outweigh the risks,” Scharre says.

Speed of decisionmaking is one example. Innovations such as self-driving cars also mark a “tremendous opportunity in the coming years to save tens of thousands of lives,” Scharre says.

The question, experts add, is whether the potential lives saved will outweigh the potential – though likely fewer – lives lost if automated systems go awry.

For his part, Kendall imagines much bigger changes in weapons automation.

“We still send human beings carrying rifles down trails to find the enemy. We still do that. Why?” he wondered aloud. “I don’t think we have to do that anymore, but it is an enormous change of mind-set.”

“Autonomy is coming,” he added. “It’s coming at an exponential rate.”
