ChatGPT CEO to Congress: AI-regulating agency needed ASAP

OpenAI CEO Sam Altman told Congress that government intervention will be critical to mitigating the risks of increasingly powerful AI systems. He said a U.S. or global regulating agency should have the authority to ensure compliance with safety standards.

OpenAI CEO Sam Altman speaks before a Senate Judiciary subcommittee on privacy, technology, and the law at a hearing on artificial intelligence, May 16, 2023, in Washington. The CEO supports the regulation of generative artificial intelligence for safety. (Patrick Semansky/AP)

The head of the artificial intelligence company that makes ChatGPT told Congress on Tuesday that government intervention will be critical to mitigating the risks of increasingly powerful AI systems.

“As this technology advances, we understand that people are anxious about how it could change the way we live. We are too,” OpenAI CEO Sam Altman said at a Senate hearing.

Mr. Altman proposed the formation of a U.S. or global agency that would license the most powerful AI systems and have the authority to “take that license away and ensure compliance with safety standards.”

His San Francisco-based startup rocketed to public attention after it released ChatGPT late last year. The free chatbot tool answers questions with convincingly human-like responses.

What started out as a panic among educators about ChatGPT’s use to cheat on homework assignments has expanded to broader concerns about the ability of the latest crop of “generative AI” tools to mislead people, spread falsehoods, violate copyright protections, and upend some jobs.

And while there’s no immediate sign Congress will craft sweeping new AI rules, as European lawmakers are doing, the societal concerns brought Mr. Altman and other tech CEOs to the White House earlier this month and have led U.S. agencies to promise to crack down on harmful AI products that break existing civil rights and consumer protection laws.

Sen. Richard Blumenthal, the Connecticut Democrat who chairs the Senate Judiciary Committee’s subcommittee on privacy, technology, and the law, opened the hearing with a recording that sounded like the senator but was actually a voice clone, trained on Mr. Blumenthal’s floor speeches, reciting opening remarks written by ChatGPT.

The result was impressive, said Mr. Blumenthal, but he added, “What if I had asked it, and what if it had provided, an endorsement of Ukraine surrendering or [Russian President] Vladimir Putin’s leadership?”

The overall tone of senators’ questioning Tuesday was polite, a contrast to past congressional hearings in which tech and social media executives faced tough grillings over the industry’s failures to manage data privacy or counter harmful misinformation. In part, that was because both Democrats and Republicans said they were interested in seeking Mr. Altman’s expertise on averting problems that haven’t yet occurred.

Mr. Blumenthal said AI companies ought to be required to test their systems and disclose known risks before releasing them, and expressed particular concern about how future AI systems could destabilize the job market. Mr. Altman was largely in agreement, though he had a more optimistic take on the future of work.

Pressed on his own worst fear about AI, Mr. Altman mostly avoided specifics, except to say that the industry could cause “significant harm to the world” and that “if this technology goes wrong, it can go quite wrong.”

But he later proposed that a new regulatory agency should impose safeguards that would block AI models that could “self-replicate and self-exfiltrate into the wild” – hinting at futuristic concerns about advanced AI systems that could manipulate humans into ceding control.

That focus on a far-off “science fiction trope” of super-powerful AI could make it harder to take action against already existing harms that require regulators to dig deep on data transparency, discriminatory behavior, and potential for trickery and disinformation, said a former Biden administration official who co-authored its plan for an AI bill of rights.

“It’s the fear of these [super-powerful] systems and our lack of understanding of them that is making everyone have a collective freak-out,” said Suresh Venkatasubramanian, a Brown University computer scientist who was assistant director for science and justice at the White House Office of Science and Technology Policy. “This fear, which is very unfounded, is a distraction from all the concerns we’re dealing with right now.”

OpenAI has expressed those existential concerns since its inception. Co-founded by Mr. Altman in 2015 with backing from tech billionaire Elon Musk, the startup has evolved from a nonprofit research lab with a safety-focused mission into a business. Its other popular AI products include the image-maker DALL-E. Microsoft has invested billions of dollars into the startup and has integrated its technology into its own products, including its search engine Bing.

Mr. Altman is also planning to embark on a worldwide tour this month to national capitals and major cities across six continents to talk about the technology with policymakers and the public. On the eve of his Senate testimony, he dined with dozens of U.S. lawmakers, several of whom told CNBC they were impressed by his comments.

Also testifying were IBM’s chief privacy and trust officer, Christina Montgomery, and Gary Marcus, a professor emeritus at New York University and one of a group of AI experts who called on OpenAI and other tech firms to pause their development of more powerful AI models for six months to give society more time to consider the risks. The letter was a response to the March release of OpenAI’s latest model, GPT-4, described as more powerful than ChatGPT.

The panel’s ranking Republican, Sen. Josh Hawley of Missouri, said the technology has big implications for elections, jobs, and national security. He said Tuesday’s hearing marked “a critical first step towards understanding what Congress should do.”

A number of tech industry leaders have said they welcome some form of AI oversight but have cautioned against what they see as overly heavy-handed rules. Mr. Altman and Mr. Marcus both called for an AI-focused regulator, preferably an international one, with Mr. Altman citing the precedent of the United Nations’ nuclear agency and Mr. Marcus comparing it to the U.S. Food and Drug Administration. But IBM’s Ms. Montgomery instead asked Congress to take a “precision regulation” approach.

“We think that AI should be regulated at the point of risk, essentially,” Ms. Montgomery said, by establishing rules that govern the deployment of specific uses of AI rather than the technology itself.

This story was reported by The Associated Press.
