Test for humans: How to make artificial intelligence safe

Jean-Francois Badias/AP
Lawmakers vote on the Artificial Intelligence Act on June 14, 2023, at the European Parliament in Strasbourg, France. The act would regulate AI.

The drumbeat of warnings over the dangers of artificial intelligence is reaching a new level of intensity. While AI researchers have long worried that AI could push people out of jobs, manipulate them with fake video, and help hackers steal money and data, some are increasingly warning that the technology could supplant humanity itself.

In April, leading tech figures published an open letter urging all AI labs to stop training their most powerful systems for at least six months. Last month, hundreds of AI researchers and others signed onto a statement suggesting humanity should approach the “risk of extinction” at the hands of the technology with the same priority it now gives to nuclear war and pandemics.

“The idea that this stuff will get smarter than us and might actually replace us, I only got worried about a few months ago,” AI pioneer Geoffrey Hinton told CNN’s Fareed Zakaria on June 11. “I assumed the brain is better and that we were just trying to sort of catch up with the brain. I suddenly realized maybe the algorithm we’ve got is actually better than the brain already. And when we scale it up, we’ll get things smarter than us.”

Why We Wrote This

As tools based on artificial intelligence spread, calls for regulating the technology are rising. A core question is whether we can trust AI – and whether we can trust ourselves to use it responsibly.

Mr. Hinton quit his job at Google in May, he says, so he could talk freely about such dangers. 

Noah Berger/AP/File
Geoffrey Hinton, known as the “godfather of artificial intelligence,” poses at Google in Mountain View, California, in 2015. He resigned from Google in 2023 to warn about unchecked AI.

Other scientists pooh-pooh such doomsday talk. The real danger, they say, is not that humanity accidentally builds machines that are too smart, but that it begins to trust computers that aren’t smart enough. Despite the technology’s big advances and the potential benefits it offers, it still makes too many mistakes to be trusted implicitly, they add.

Yet the lines between these scenarios are blurry – especially as AI-driven computers grow rapidly more capable without having the moral-reasoning abilities of humans. The common denominator is trust: How much of it do machines deserve? And how vulnerable are humans to misplaced trust in machines?

In fact, the systems are so complex that not even the scientists who build them know for sure why they come up with the answers they do, which are often amazing and, sometimes, completely fake.

“It’s practically impossible to actually figure out why it is producing that string of text,” says Derek Leben, a business ethicist at Carnegie Mellon University in Pittsburgh and author of “Ethics for Robots: How To Design a Moral Algorithm.”

“That’s the biggest issue,” says Yilun Du, a Ph.D. student at the Massachusetts Institute of Technology working on intelligent robots. “As a researcher in that area, I know that I definitely cannot trust anything like that. [But] it’s very easy for people to be deceived.” 

Already, examples are piling up of AI systems deceiving people:

  • A lawyer who filed an affidavit citing six bogus court cases, with made-up names like Varghese v. China Southern Airlines, told a New York judge at a sanctions hearing on June 8 that he was duped by the AI system he relied on.
  • A Georgia radio host has sued OpenAI, the company that makes the popular ChatGPT, claiming that the AI system created out of thin air a legal complaint accusing him of embezzlement.
  • Suspicious that his students were using ChatGPT to write their essays, a professor at Texas A&M University-Commerce ran their papers through the same system and gave a zero to those the AI system said it wrote. But the system can’t reliably recognize what it has written. The university intervened, ensuring that none of the students failed the class or were barred from graduation.

These are just hints of the risks in store, AI scientists warn. Throw away the sci-fi visions of Terminator-type robots taking over the world – those are still far-fetched with today’s technology – and the risks of human extinction don’t disappear. Scientists point to the possibility of the technology allowing bad actors to create bioweapons, or boosting the lethality of warfare waged by nation-states. It could also enable unscrupulous political actors to use deepfake images and disinformation so effectively that a nation’s social cohesion – vital to navigating environmental and political challenges – breaks down. 

Elizabeth Frantz/Reuters
OpenAI CEO Sam Altman testifies before a Senate Judiciary subcommittee on Capitol Hill in Washington, May 16, 2023. He urged regulatory oversight of artificial intelligence.

The manipulation of voters and the spreading of disinformation are some of the biggest worries, especially with the approach of next year’s U.S. elections, OpenAI CEO Sam Altman told a Senate panel last month. “Regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models,” he said.

OpenAI is the creator of ChatGPT, which has fueled much of the AI hype – both positive and negative – ever since its release to the public late last year. It has raised hopes that workers could become much more productive, researchers could make quicker discoveries, and the pace of progress generally would increase. In a survey of CEOs last week, 42% said AI could potentially destroy humanity within the next five to 10 years, while 58% said that could never happen and they are “not worried.”

Legislators on both sides of the Atlantic are eager to set up guardrails for the burgeoning technology. The European Union seized the lead last week by agreeing to the draft of an act that would rate AI technologies from “minimal” to “unacceptable” risk. AI deemed unacceptable would be banned and “high risk” applications would be tightly regulated. Many of the leading AI technologies today would likely be considered high or unacceptable risk.

In the United States, the National Institute of Standards and Technology has created an AI risk-management framework. But many in Congress want to go further, especially in light of the perceived failure to regulate social media in a timely manner. 

“A lot of the senators [at last month’s hearing] were explicitly saying, ‘We don’t want to make the same mistakes with AI,’” says Mr. Leben of Carnegie Mellon. They said, “‘We want to be proactive about it,’ which is the right attitude to have.”

How to regulate the industry remains an open question. Many policymakers are looking for more transparency from the companies about how they build their AI systems, a requirement in the proposed EU law. Another idea being floated is the creation of a regulatory agency that would oversee the companies developing the technology and mitigate the risks.

“We as a society are neglecting all of these risks,” Jacy Reese Anthis, a doctoral student at the University of Chicago and co-founder of the Sentience Institute, writes in an email. “We use training and reinforcement to grow a system that is extremely powerful but still a ‘black box’ to even its designers. That means we can’t reliably align it with our goals, whether that’s the goal of fairness in criminal justice or of not causing extinction.”
