Elon Musk spends $10 million to stop robot uprising

Elon Musk has joined the ranks of public intellectuals who are sounding the alarm about the dangers posed by advanced artificial 'superintelligence.'

Rebecca Cook / Reuters
Tesla Motors CEO Elon Musk talks at the Automotive World News Congress at the Renaissance Center in Detroit, Michigan, January 13, 2015.

Yesterday, SpaceX and Tesla Motors founder Elon Musk donated $10 million to help save the world – or so he thinks.

Musk’s donation went to the Future of Life Institute (FLI), a “volunteer-run research and outreach organization working to mitigate existential risks facing humanity.” To that end, Musk’s money will be distributed to like-minded researchers around the world. But what exactly are these “existential risks” humanity is supposedly pitted against?

As the memory and processing power of computers steadily approach those of the human brain, some predict that an artificial “superintelligence” is just on the horizon. And while the prospect has the scientific community buzzing about the possibilities, some academics are hesitant. Musk and others see artificial intelligence as a dangerous new frontier – and perhaps a threat comparable to nuclear war. Crazy? Maybe not, according to a growing list of prominent scientific thinkers.

"There are seven billion of us on this little spinning ball in space. And we have so much opportunity," MIT professor and FLI founder Max Tegmark told the Atlantic. "We have all the resources in this enormous cosmos. At the same time, we have the technology to wipe ourselves out."

And the FLI isn’t just some social club for rich weirdos. Stephen Hawking and Morgan Freeman are both on the organization’s scientific advisory board, bringing brain power and star power to its support base. Skype creator Jaan Tallinn co-founded the group. The rest of the board is composed of academics with pedigrees from Harvard, MIT, and Cambridge University.

Oxford University’s Nick Bostrom, who is also on the board, wrote an entire book on the subject of AI takeover: Superintelligence: Paths, Dangers, Strategies. In its preface, he writes:

“In principle, we could build a kind of superintelligence that would protect human values. We would certainly have strong reason to do so. In practice, the control problem – the problem of how to control what the superintelligence would do – looks quite difficult. It also looks like we will only get one chance. Once unfriendly superintelligence exists, it would prevent us from replacing it or changing its preferences. Our fate would be sealed.”

In the works of science-fiction writer Isaac Asimov, intelligent machines are bound by “The Three Laws of Robotics,” which forbid them to cause harm to humans. But that wouldn’t necessarily work in the real world, Bostrom writes. He suggests that superintelligences might respond to human requests with perverse instantiation – that is, they could achieve a desired outcome by unintended means. For example, a superintelligence programmed to make us happy would choose the most efficient and effective way of doing so – by implanting electrodes into the pleasure centers of our brains.

As dire as it all sounds, the FLI’s stated goal isn’t to halt the progress of artificial intelligence research. Instead, it hopes to ensure that AI systems remain “robust and beneficial” to human society.

"Building advanced AI is like launching a rocket,” Tallinn stated in a press release. “The first challenge is to maximize acceleration, but once it starts picking up speed, you also need to to focus on steering."

But if superintelligent AI really does pose a threat to mankind, how do we assess that threat? How can humans anticipate the actions of a fundamentally more intelligent machine? Of a being that became sentient not through Darwinian natural selection, but by human ingenuity?

The members of FLI don’t have the answers. They just want the scientific community to start asking the questions, Tegmark says.

"The reason we call it The Future of Life Institute and not the Existential Risk Institute is we want to emphasize the positive," Tegmark told the Atlantic. "We humans spend 99.9999 percent of our attention on short-term things, and a very small amount of our attention on the future."

