Google builds neural network, makes it hallucinate

Using image-recognition software and a feedback loop, Google researchers have produced art both beautiful and strange.

Courtesy of Google and MIT Computer Science and AI Laboratory
Neural net 'dreams' – generated purely from random noise, using a network trained on places by MIT Computer Science and AI Laboratory.

What if you could see the way Google thinks? Well, thanks to images just released, it is now possible to peer into the inner workings of the search giant's artificial neural networks. It turns out that Google's mind – if you can call it that – produces images that are eerie, beautiful, and sometimes both at once.

Google researchers Alexander Mordvintsev, Christopher Olah, and Mike Tyka ran an experiment in which they used image recognition software not to identify images that it was familiar with, but to create them.

After "teaching" an artificial neural network to recognize certain objects, animals, and buildings, the researchers then threw the system a loop, literally. They gave the system an image that didn't have any of these things, and tasked it with identifying any feature that it recognized, and then to alter the image to emphasize that feature. Then they took the altered image and fed it back into the system to do it over and over.

"If a cloud looks a little bit like a bird," the researchers wrote, "the network will make it look more like a bird. This in turn will make the network recognize the bird even more strongly on the next pass and so forth, until a highly detailed bird appears, seemingly out of nowhere."

The results are images distorted beyond imagination. Well, human imagination, that is: peacocks appear out of water, doglike animals float atop buildings, and buildings emerge out of mountains.

Courtesy of Google Research
Image of a man on a horse is run through a filter taught to search for and emphasize animal-like shapes.

The level of distortion depended on how many times the software was commanded to repeat the search on its own output: the greater the number of passes, the more distorted the image. The researchers also found that the artificial intelligence network could create a photo out of “noise,” using networks trained by MIT Computer Science and AI Laboratory.

The researchers began this project in order to better understand what occurs at each layer of the neural network. By feeding a photo into the system and then running it through the feedback loop, the team was able to invert the problem and glimpse the artificial thought process. The example the research team presents in its blog is teaching the neural network to identify a "banana." The team would begin with an image consisting only of "random noise," then change it in tiny increments to create a photo that the neural network would identify as a "banana."
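
That noise-to-banana process can also be sketched as plain gradient ascent on a classifier's output score. Again, this is an assumption-laden illustration rather than the researchers' published method: class index 954 is "banana" in the standard ImageNet label set, while the optimizer, learning rate, and step count below are arbitrary choices.

# A rough sketch of the noise-to-"banana" idea, not the researchers' published code.
# Assumes PyTorch and torchvision; 954 is the "banana" index in the ImageNet labels.
import torch
import torchvision.models as models

model = models.googlenet(weights="IMAGENET1K_V1").eval()
BANANA = 954

img = torch.rand(1, 3, 224, 224, requires_grad=True)   # start from random noise
optimizer = torch.optim.Adam([img], lr=0.05)

for _ in range(200):                                    # tiny increments, many times
    optimizer.zero_grad()
    score = model(img)[0, BANANA]
    (-score).backward()                                 # climb the "banana" score
    optimizer.step()
    with torch.no_grad():
        img.clamp_(0, 1)                                # keep pixels in a valid range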

Anyone who has ever seen the man in the moon or a face in a piece of toast might be able to empathize with Google's network. Known as pareidolia, the tendency to generate familiar images and sounds out of random stimuli is universal among humans, and lies behind everything from emoticons to constellations.
