AI in the real world: Tech leaders consider practical issues.

The practical ethics of AI may have less to do with the Terminator, and more to do with terminated workers.

Stephen Lam/Reuters/File
Facebook CEO Mark Zuckerberg appears at the Facebook F8 conference in San Francisco, California on April 12, 2016. Zuckerberg and other Silicon Valley leaders have convened on a new panel to consider the practical future of artificial intelligence.

The discussion on artificial intelligence has been flooded with concerns of “singularity” and futuristic robot takeovers. But how will AI impact our lives five years from now, compared to 50?

That’s the focus of Stanford University’s One Hundred Year Study on Artificial Intelligence, or AI100. The study, which is led by a panel of 17 tech leaders, aims to predict the impact of AI on day-to-day life – everywhere from the home to the workplace.

“We felt it was important not to have those single-focus isolated topics, but rather to situate them in the world because that’s where you really see the impact happening,” Barbara Grosz, an AI expert at Harvard and chair of the committee, said in a statement.

Researchers kicked off the study with a report titled “Artificial Intelligence and Life in 2030,” which considers how advances like delivery drones and autonomous vehicles might integrate into American society. The panel – which includes executives from Google, Facebook, Microsoft and IBM – plans to amend the report with updates every five years.

For most people, the report suggests, self-driving cars will be the technology that brings AI to mainstream audiences.

“Autonomous cars are getting close to being ready for public consumption, and we made the point in the report that for many people, autonomous cars will be their first experience with AI,” Peter Stone, a computer scientist at the University of Texas at Austin and co-author of the Stanford report, said in a press release. “The way that is delivered could have a very strong influence on the way the public perceives AI for the years coming.”

Stone and colleagues hope that their study will dispel misconceptions about the fledgling technology. They argue that AI won’t automatically replace human workers – rather, it will supplement the workforce and create new jobs in tech maintenance. And just because an artificial intelligence can drive your car doesn’t mean it can walk your dog or fold your laundry.

“I think the biggest misconception, and the one I hope that the report can get through clearly, is that there is not a single artificial intelligence that can just be sprinkled on any application to make it smarter,” Stone said.

The group has also considered regulation. Given the diversity of AI technologies and their wide-ranging applications, panelists argue that a one-size-fits-all policy simply wouldn’t work. Instead, they advocated for increased public and private spending on the industry, and recommended increased AI expertise at all levels of government. The group is also working to create a framework for self-policing.

“We’re not saying that there should be no regulation,” Stone told The New York Times. “We’re saying that there is a right way and a wrong way.”

But there are other issues, some even trickier than regulation, which the study has not yet considered. AI applications in warfare and “singularity” – the notion that artificial intelligences could surpass human intellect and suddenly trigger runaway technological growth – did not fall within the scope of the report, panelists said. Nor did it focus heavily on the moral status of artificially intelligent agents themselves.

No matter how “intelligent” they become, AIs are still based on human-developed algorithms. That means human biases can be infused into a technology that would otherwise operate independently. A number of photo apps and facial recognition programs, for example, have been found to misidentify nonwhite people.

“If we look at how systems can be discriminatory now, we will be much better placed to design fairer artificial intelligence,” Kate Crawford, a principal researcher at Microsoft and co-chairwoman of a White House symposium on society and AI, wrote in a New York Times op-ed. “But that requires far more accountability from the tech community. Governments and public institutions can do their part as well: As they invest in predictive technologies, they need to commit to fairness and due process.”

As it turns out, there are already groups dedicated to tackling these ethical concerns. LinkedIn founder Reid Hoffman has collaborated with the Massachusetts Institute of Technology Media Lab, both to explore the socioeconomic effects of AI and to design new tech with society in mind.

“The key thing that I would point out is computer scientists have not been good at interacting with the social scientists and the philosophers,” Joichi Ito, the director of the MIT Media Lab, told The New York Times. “What we want to do is support and reinforce the social scientists who are doing research which will play a role in setting policies.”
