Washington rushing to put guardrails on AI – fast enough?

J. Scott Applewhite/AP
Alexandr Wang (left), CEO of Scale AI, speaks with Klon Kitchen of the American Enterprise Institute before they testify to a House Armed Services subcommittee about using artificial intelligence on the modern battlefield, at the Capitol in Washington, July 18, 2023.

Computer science professor Stuart Russell had been thinking about the massive potential benefits as well as the risks of artificial intelligence long before AI became a buzzy acronym with the rise of the ChatGPT app this year. 

“It’s as if an alien civilization warned us by email of its impending arrival, and we replied, ‘Humanity is currently out of the office,’” said Professor Russell of the University of California, Berkeley, at a congressional hearing last week. But he gave a nod to the growing awareness among the public, as well as policymakers in Washington, that this emerging technology requires oversight. “Fortunately, humanity is now back in the office and has read the email.”

Of course, it’s a long jump from registering the warning to preparing for the arrival of a potent new force, but waking up to its risks is an important first step. And over the past year, Washington has made initial efforts to size up the challenge and strategize about how to install some guardrails – before AI races past them. However, this represents perhaps the fastest-moving scientific challenge the slow-moving creature of Washington has ever grappled with, requiring it to streamline its typically bureaucratic approach to problem-solving. 

Why We Wrote This

With artificial intelligence advancing at lightning speed, many experts, and increasingly policymakers, say that Washington needs to move faster than usual on regulation and oversight.

“We don’t have a lot of time,” CEO Dario Amodei of Anthropic, a San Francisco-based firm that aims to create “reliable, beneficial” AI systems, told senators last week. “Whatever we do, we have to do it fast.”

The reason for urgency? Experts say that, with AI capable of making advances at an exponential pace, efforts to control how it is used – or to avoid unintended harm to society – may only get harder over time.

As AI-related discussions have unfolded around Washington over the past year, several key ideas have gained currency: (1) creating a regulatory agency to oversee the fast-growing field and ensure that the goal of protecting the public is not entangled with the profit motive, as it would be inside a private company; (2) establishing liability, so that AI developers know they will be held responsible if their systems are used for nefarious ends; and (3) requiring transparency in AI models and clear identification of AI-generated materials, such as with a watermark or a red frame around a political ad.

An active Congress and White House

Over the past several months, more than 20 bills have been introduced in the House and Senate dealing with various aspects of AI – from requiring the government to conduct risk assessments and develop a public health preparedness strategy, to establishing a bipartisan national commission to make recommendations.

Last week’s hearing was a sign that lawmakers are trying to move forward, stepping up efforts to educate themselves as they consider potential measures.

The Biden administration, for its part, recently welcomed seven AI companies to the White House to announce a voluntary pact committing them to internal and external testing of their models before public release. Amazon, Google, Meta, Microsoft, and OpenAI (the creator of ChatGPT) signed on, as did Mr. Amodei’s company, Anthropic, and another startup, Inflection.

The pact, which focused on building safety, security, and trust, was hailed as an important milestone, following on the White House’s Blueprint for an AI Bill of Rights last fall and an AI risk management framework released in January by the Commerce Department’s National Institute of Standards and Technology. 

However, some said the new pact was too vague and lagged behind efforts elsewhere in the world, from the European Union to China. Experts say that a more robust framework is needed, with the teeth to enforce it – ideally by a new federal agency that could respond and adapt to emerging challenges more quickly than, say, Congress.

Manuel Balce Ceneta/AP
President Joe Biden (far left) arrives to speak about a pact with corporations on voluntary safety practices for artificial intelligence, at the White House, July 21, 2023. Representing the firms were (from left) Adam Selipsky of Amazon Web Services, Greg Brockman of OpenAI, Nick Clegg of Meta, Mustafa Suleyman of Inflection AI, Dario Amodei of Anthropic, and Kent Walker of Alphabet.

While many see the potential for AI to greatly benefit humanity, the technology raises grave concerns about everything from data privacy and election integrity to autonomous weapons and new biological threats. It could also reinforce or exacerbate societal inequities – a concern already raised, for example, regarding the consequences of facial recognition systems being less accurate among people of color.

Tristan Harris and Aza Raskin, who were featured in “The Social Dilemma” documentary warning of the dangers of social media, have described the latest chapter of AI development as a threat to humanity on par with the evolution of nuclear weapons – but worse. 

“Nukes don’t make stronger nukes, but AI makes stronger AI,” said Mr. Raskin in a March talk the pair gave to more than 100 leaders in fields ranging from finance to government.

That means that as AI learns more, it can apply the gains across different fields, added Mr. Harris, a former design ethicist at Google. “It’s like an arms race to strengthen every other arms race,” he said, urging leaders to realize the responsibility they have to institute new systems for containing the new technology, just as the world did in its attempt to rein in nuclear proliferation and avert nuclear war.

It’s not just them. Recently, more than 250 AI experts, including Professor Russell, signed a statement saying that mitigating the risk of extinction from AI should be as high a priority as addressing the risk of pandemics and nuclear weapons.

Though there are many potentially constructive uses of AI, it is likely to be a disruptive force in society, even without any Hollywood-style plots about sentient machines intentionally attacking humans.

Bipartisan cooperation

So far, there appears to be bipartisan consensus to act on AI, removing one of the major obstacles to Washington policymaking. 

Senate Majority Leader Chuck Schumer, a New York Democrat, described the Chinese Communist Party’s release this spring of its approach to regulating AI as “a wake-up call” to the United States. For months, he has been discussing and refining ideas for how America could take the global lead on AI innovation and shape the rules of the road. 

“We must approach AI with the urgency and humility it deserves,” said Mr. Schumer in a statement outlining his SAFE Innovation Framework, noting that he was encouraging bipartisan policy work and legislation across numerous committees.

J. Scott Applewhite/AP/File
Senate Majority Leader Chuck Schumer of New York arrives to speak with reporters at the Capitol in Washington, May 3, 2023. Mr. Schumer has released a general framework of what regulation of artificial intelligence could look like.

As part of that, he is convening a series of nine bipartisan forums for all senators to get up to speed on various aspects of AI, starting with three this summer on AI’s current capabilities, future frontier, and U.S. defense and intelligence capabilities in the field vis-à-vis America’s adversaries.

Similarly, Speaker of the House Kevin McCarthy has said that all members of the House Intelligence Committee would take AI courses resembling those provided to military generals. And the congressional hearing last week, which featured AI “godfather” Yoshua Bengio of the University of Montreal as well as Professor Russell and Mr. Amodei, was marked by an unusual level of bipartisan comity. 

“What you see here is not all that common, which is bipartisan unanimity,” said Chair Richard Blumenthal, a Democrat and former attorney general of Connecticut, who oversaw a serious, substantive discussion for more than two and a half hours before a packed hearing room. 

“There has to be a cop on the beat,” he said. “That cop on the beat, in the AI context, has to be not only enforcing rules but also, as I said at the very beginning, incentivizing innovation – and sometimes funding it – to provide the air bags and seat belts and the crash-proof kinds of safety measures that we have in the automobile industry.”

Lessons from social media policy

Another view shared across Washington is that policymakers must apply the lessons of their failure to deal sooner and more decisively with social media platforms.

“We can’t repeat the mistakes we made on social media, which was to delay and disregard the dangers,” said Mr. Blumenthal. The Connecticut senator, along with the top Republican on the subcommittee, Sen. Josh Hawley of Missouri, has co-sponsored legislation to prevent AI companies from claiming immunity for third-party content – as social media companies have done – under Section 230 of the Communications Decency Act, part of a 1996 telecommunications law.

Senator Hawley, a longtime critic of Big Tech, commended his Democratic counterpart for putting together such substantive hearings but expressed skepticism, given legislators’ failure to rein in social media – especially since many of the key players are the same now, including Google, which owns YouTube, and Meta, which owns Facebook.

“Will Congress actually do anything?” asked Senator Hawley, who said his priorities were protecting workers, kids, and consumers – as well as national security. “We’ve had a lot of talk, but now is the time for action.”
