Can AI outsmart Europe’s bid to regulate it?

Jean-Francois Badias/AP/File
The Parliament of the European Union reached an agreement last week with representatives of its 27 member states on the draft text of the Artificial Intelligence Act.

“Landmark” was the headline of choice, and little wonder. After months of discussion and debate among politicians, pundits, and pressure groups worldwide, a group of legislators was finally taking regulatory steps to address the potential dangers of artificial intelligence.

And not just any legislators. Following a series of marathon meetings, the Parliament of the European Union – the world’s largest bloc of free-trading democracies – had reached agreement with representatives of its 27 member states on the draft text of the Artificial Intelligence Act.

Last Friday’s announcement, however, also drew attention for the twin wake-up calls it sounded.

Why We Wrote This

Artificial intelligence is changing people’s lives at a dizzying pace. Will new European Union regulations designed to make AI “trustworthy and human-centric” work?

First, it brought home how difficult it is proving for governments to place effective guardrails on the dizzyingly rapid expansion of AI. The EU began working on its AI strategy in 2018, and the new law won’t take full effect until sometime in 2026.

Yet it also homed in on the main reason that task is becoming more urgent: the impact already being felt on the everyday lives, rights, and political autonomy of individual citizens around the globe.

The EU’s purpose is explicit: ensuring “trustworthy, human-centric” use of AI as ever more powerful computer systems mine, and learn from, ever larger masses of digital data, spawning an ever wider array of applications.

The same technology that may now allow researchers to unlock the mystery of a virus could help create one. Large language models such as ChatGPT can produce fast, fluent prose drawn from billions of words on the internet, but they can, and indeed do, make mistakes, producing misinformation. And that same huge store of data can be abused in other ways.

One key individual-rights concern for the EU legislators was the prospect that AI could be employed, as is the case in China, to surveil and target citizens or particular groups in Europe.

The new law bans scouring the internet for images to create face-recognition libraries, as well as the use of visual profiling. The police would be exempted, but only under tightly defined circumstances.

Carolyn Thompson/AP/File
A camera with face-recognition capabilities is installed at a New York school in 2018, before such cameras were banned from schools in the state earlier this year.

More broadly, though the exact wording of the law has yet to be published, it will reportedly ensure that people are told whether the words and images they’re seeing on their screens have been generated not by humans, but by AI.

Among systems to be banned outright are any “manipulating human behavior to circumvent free will.”

The most powerful “foundation” AI systems – the general-purpose platforms on which developers are building a whole range of applications – will face testing, transparency, and reporting requirements, and will be obliged to share details of their internal workings with EU regulators.

All of this will be enforced by a new AI regulatory body, with fines for the most serious violations as high as 7% of a company’s global turnover.

Still, the laborious process of producing the AI Act is a reminder of the head winds still facing efforts to place internationally agreed-upon guardrails around a technological revolution whose reach transcends borders.

In the world’s major AI power, the United States, President Joe Biden issued an executive order in October imposing safety tests on developers of the most powerful systems. He also mandated standards for federal agencies purchasing AI applications.

His aim, like the EU’s, was to ensure “safety, security, and trust.”

Yet officials acknowledged that more comprehensive regulation would need an act of Congress, which still seems far from agreeing on how, or even whether, to legislate limits.

One obstacle is the AI companies themselves. Though they acknowledge potential perils, they have argued that overregulation could limit AI’s growth and curtail its benefits.

And would-be regulators also face geopolitical obstacles, especially the rivalry between the U.S. and China.

One sign has been Washington’s move to limit Chinese access to the latest, specialized computer chips key to building the highest-powered AI systems.

And that touches on a wider national security issue: the growing role of artificial intelligence in weapons systems. Drones have played a major role in Ukraine’s war against Russia’s invasion and in Israel’s attacks on Gaza. The next evolutionary step, military analysts suggest, could be AI-powered “drone swarms” on future battlefields.

The priority of the U.S. is clearly to seek an edge in AI weaponry – at least until there is a realistic hope of bringing China, Russia, and other high-tech military powers into the kind of agreements that, last century, helped limit nuclear weapons.

The EU’s new law does not even cover military applications of AI.

So for now, its main impact will be on the kind of “trust” and “human-centric” issues that European authorities and Mr. Biden both highlighted: letting people know when words or images have been created by AI, and, the lawmakers hope, blocking applications that seek deliberately to manipulate users’ behavior.

Still, that could prove important not just for individuals but also for the societies they live in – the beginning of a fight against the use of AI to “amplify polarization, bias, and misinformation” and thus undermine democracies, as one leading AI expert, Dr. De Kai, recently put it.

The historian Yuval Harari has voiced particular alarm over AI’s increasingly powerful ability to “manipulate and generate language, whether with words, sounds, or images,” noting that language, after all, forms the bedrock of how we humans interact with one another.

“AI’s new mastery of language,” he says, “means it can now hack and manipulate the operating system of civilization.”
