Why AI stories are more about humans than about machines

20th Century Studios © 2023
Madeleine Yuna Voyles stars as the robot child Alphie in “The Creator,” a film arriving Sept. 29.


Humans are at war with machines. In the near future, an artificial intelligence defense system detonates a nuclear warhead in Los Angeles. It deploys a formidable army of robots, some of which resemble people. Yet humans still have a shot at victory. So a supersoldier is dispatched on a mission to find the youth who will one day turn the tide in the war.

No, it’s not another movie in “The Terminator” series. 

In “The Creator,” opening Sept. 29, the hunter is a human named Joshua (John David Washington). He discovers that the humanoid he’s been sent to retrieve looks like a young Asian child (Madeleine Yuna Voyles). It even has a teddy bear. As Joshua bonds with the robot, he wonders whether machines are really the bad guys. 

Why We Wrote This

Representations of artificial intelligence in popular culture help push society to think more about technology’s role – and which human values it reflects.

“All sorts of things start to happen as you start to write that script where you start to think, ‘Are they real? And how would you know?’” writer and director Gareth Edwards told the Monitor during a virtual Q&A session for journalists. “‘What if you didn’t like what they were doing – could you turn them off? What if they didn’t want to be turned off?’”

Popular culture has profoundly influenced how we think and talk about artificial intelligence. Since ChatGPT’s giant leap forward, AI has often been cast as the villain. AI supercomputers go rogue in Gal Gadot’s Netflix thriller “Heart of Stone” and the latest “Mission: Impossible” movie. For dramatic effect, AI is often embodied in robots. They’re not only sentient, but also the killer who’s in the house – quite literally, in the case of “M3GAN,” the murderous high-tech doll. The message: Be kind to your Alexa, or it may set the Roomba on you.

Geoffrey Short/Universal Pictures
In the 2022 horror movie “M3GAN,” a lifelike doll develops a mind of her own.

But the more thoughtful AI stories are really more about humans than about machines. The scenarios about good AI versus evil AI push society to consider ethical frameworks for the technology: How can it represent and embody our best and highest values? 

“Sometimes we are so excited about the technology that we forget why we build the technology,” says Francesca Rossi, president of the Association for the Advancement of Artificial Intelligence (AAAI). “We want our humanity to progress in the right direction through the use of technology.” 

Before ChatGPT was a twinkle in the eye of a search engine, Arthur C. Clarke, Philip K. Dick, and William Gibson were writing about the ethics of AI. Isaac Asimov’s stories posited the Three Laws of Robotics: (1) A robot may not injure a human. (2) A robot must obey human commands, unless they conflict with the first law. (3) A robot must protect its own existence, so long as doing so doesn’t conflict with the first or second laws.

At first, the laws sound good. A closer examination reveals that they’re a literary device with loopholes that the author could exploit for “whodunit” murder mysteries. But in an era in which the Australian military has developed combat AI robodogs – reminiscent of the machine K-9s in the “Black Mirror” episode “Metalhead” – Mr. Asimov’s framing seems freshly relevant.
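As a thought experiment, the three laws can even be written out as a priority-ordered rule check. The sketch below is purely illustrative – the action model and its tidy true/false labels are invented here, not drawn from Mr. Asimov or from any real robot:

```python
# Toy sketch of Asimov's Three Laws as a priority-ordered rule check.
# The Action fields are invented for illustration; no real system comes
# with a ready-made "harms_human" flag -- that ambiguity is precisely the
# loophole Asimov's stories exploit.
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool       # relevant to the First Law
    ordered_by_human: bool  # relevant to the Second Law
    endangers_self: bool    # relevant to the Third Law

def permitted(action: Action) -> bool:
    if action.harms_human:
        return False  # First Law outranks everything
    if action.ordered_by_human:
        return True   # Second Law: obey, since the First Law is satisfied
    if action.endangers_self:
        return False  # Third Law: avoid needless self-destruction
    return True

print(permitted(Action("open the pod bay doors", False, True, False)))  # True
```

The neat booleans do all the work; deciding what actually counts as “harm” is exactly where the stories find their loopholes.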

Courtesy of Jeff Vintar
“The real issue is the ethics of the people behind the robots. Do we want robots that can kill? Apparently we do, because we’re making them right now.” – Jeff Vintar, a screenwriter for the 2004 blockbuster “I, Robot”

“The real issue is the ethics of the people behind the robots,” says Jeff Vintar, a screenwriter for the 2004 blockbuster “I, Robot,” named after a collection of Mr. Asimov’s short stories. “Do we want robots that can kill? Apparently we do, because we’re making them right now.” 

AI and human aims

If AI should be aligned with human goals, the question is, which ones? HAL 9000, the onboard computer in the 1968 film “2001: A Space Odyssey,” illustrates the dilemma of conflicting values. An astronaut returns to the spaceship and asks HAL to open the pod bay doors. The computer refuses. It places a higher priority on the success of the mission than on the life of the astronaut. 

The 2002 movie “Minority Report” is a more earthbound example of competing values. In the story by Mr. Dick, police can predict criminal acts in advance. The result is a tension between safety and privacy. In real life, police are now using AI technology to identify potential future crime by analyzing data about previous arrests, specific locations, and events. Critics claim the algorithms are racially biased.

“This does seem to be coming true, and ‘predictive policing’ doesn’t seem to be so great in that movie,” says Lee Barron, author of “AI and Popular Culture.” “[Mr. Dick] is a particularly prescient writer.”

Perhaps some of the time. The sci-fi author’s book “Do Androids Dream of Electric Sheep?” – later adapted as the movie “Blade Runner” – imagined AI robots that are indistinguishable from humans. But it also predicted that we’d have flying cars by now.

“We’re not good at futurism,” writer Fredrik deBoer, who has written about AI for the online magazine Persuasion, says in a Zoom call. “Future forecasting is really hard for us.”

Mr. deBoer cautions that humankind is prone to overhyping the impact of new technologies, citing the Human Genome Project as an example. He wonders if AI will ultimately prove less revolutionary than imagined.

20th Century Fox
Will Smith stars as a Chicago police detective trying to solve a murder with a nonhuman suspect in the 2004 film “I, Robot,” named after a collection of stories by Isaac Asimov.

The arrival of ChatGPT certainly startled and awed the world with the chatbot’s grasp of language and its communicative abilities. It has amplified debates over whether AI will become sentient – or at least evolve into such a convincing simulacrum of consciousness that we will imagine it to be a living entity with a soul. Could we fall hopelessly in love with sultry-voiced AI entities on our phones, as Joaquin Phoenix does in “Her”?

Pop culture may have conditioned us to fear that AI will destroy humanity if it becomes sentient. That prevalent notion amounts to fearmongering, says Ian Watson, co-writer of the Steven Spielberg movie “A.I. Artificial Intelligence.” Nonbiological machines are heuristic algorithms, the sci-fi author says in a phone interview. It’s possible that self-aware machines may never exist, he adds. In his Pinocchio-like screenplay, which he originally wrote for Stanley Kubrick, a robot boy named David wants to become human. At the end of the movie, David discovers that’s impossible.

Daniel H. Wilson, author of the bestselling 2011 novel “Robopocalypse,” thinks that AI could someday pass the Turing test – that is, appear to think like a human. But he says there hasn’t yet been the requisite breakthrough in mathematics and algorithms to make so-called artificial general intelligence possible. ChatGPT, by contrast, is known as generative AI. It lacks the ability to truly understand context. The technology is a predictive algorithm, trained on text scraped from the web, that calculates the most likely response to a query. He finds that worrisome.

“Generative AI is creating humanlike intelligence by regurgitating billions of data points taken mostly from people on the internet,” says Mr. Wilson, a former robotics engineer. “Can you imagine a worse mirror to hold up to humanity than all of our moments from the internet?”
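In miniature, the “predictive algorithm” Mr. Wilson describes looks something like the toy next-word predictor below. This is a sketch only – the 13-word corpus is invented, and real generative models use neural networks trained on web-scale text rather than a lookup table:

```python
# Minimal sketch of next-word prediction, the core idea behind generative AI.
# The corpus is invented; real models train neural networks on vastly more text.
from collections import Counter, defaultdict

corpus = "the robot opened the door the robot closed the door the robot sang".split()

# Count which word follows which in the training text.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent continuation seen in training."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "robot" -- the model can only echo what it was fed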

The role the public plays 

Some computer scientists are working to create healthier AI inputs. A 2023 college textbook titled “Computing and Technology Ethics: Engaging Through Science Fiction” includes reprints of short sci-fi stories that prompt students to contemplate ethical dilemmas in computer programming.

“Once you’re inside a story, thinking from another point of view, issues of motivation [and] issues of social effects are much clearer,” says Judy Goldsmith, a professor of computer science at the University of Kentucky and one of five co-authors of the textbook. The book helps students think beyond the lens of utilitarianism, she adds.

Ms. Rossi from AAAI has a copy of that textbook on her desk. Her favorite sci-fi allegory is Pixar’s “WALL-E.” In the 2008 movie, obese humans aboard an intergalactic ship have become wholly beholden to AI. They’ve forfeited meaningful connections with others because they’re constantly staring at screens. 

“‘WALL-E’ is one that really brings up this concept of passively accepting the technology because it makes our life easier,” says Ms. Rossi, who is also the AI ethics global leader at IBM. “In order to keep AI safe and take care of the ethics issues, companies have to do their part. The regulators have to do their part. But every user has to use it responsibly and with awareness.”

