What if your laptop knew how you felt?

Researchers train computers to 'read' emotions, which could help with teaching, security, people with autism – and cranky users.

Not even Dan Brown and his Da Vinci codebreakers dared tackle the mystery of Mona Lisa's smile. But Nicu Sebe, a computer vision researcher at the University of Amsterdam, Netherlands, did. He processed the enigmatic portrait with his "emotion recognition" software and – eureka! – Mona Lisa was happy (83 percent) and slightly disgusted (9 percent).

Mr. Sebe valiantly pursued other mysteries. He decoded the image of Che Guevara that adorns T-shirts worldwide and proclaimed that El Comandante was mostly sad. And the fellow in Edvard Munch's "The Scream"? He's much less frightened than surprised, Sebe declares.

Faces reveal emotions, and researchers in fields as disparate as psychology, computer science, and engineering are joining forces under the umbrella of "affective computing" to teach machines to read expressions. If they succeed, your computer may one day "read" your mood and play along. Machines equipped with emotional skills could also be used in teaching, robotics, gaming, sales, security, law enforcement, and psychological diagnosis.

Sebe doesn't actually spend research time analyzing famous images – that's just for fun. And calling Mona Lisa "happy" is not accurate science, but saying she displays a mixture of emotions is, Sebe says. Why? Because to accurately read an emotional state, a computer needs to analyze changes in expression against a neutral face, which Da Vinci did not provide.

If that's the case, are computers even close to reading emotions? You bet.

Computers can now analyze a face from video or a still image and infer the emotion it displays almost as accurately as humans, and sometimes better. It generally works like this:

1. The computer isolates the face and extracts rigid features (movements of the head) and nonrigid features (expressions and changes in the face, including texture);

2. The information is classified using codes that catalog changes in features;

3. Then, using a database of images exemplifying particular patterns of motions, the computer can say a person looks as if they are feeling one of a series of basic emotions – happiness, surprise, fear – or simply describe the movements and infer meaning.
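To make those three steps concrete, here is a minimal sketch in Python. Everything in it is invented for illustration – the feature names, the exemplar values, and the nearest-neighbor matching – but it shows the basic idea of coding a face's change from a neutral baseline and comparing that change against a database of known patterns.

```python
# Illustrative sketch of the three-step pipeline described above.
# The feature vectors are made up (hypothetical measurements of how far
# parts of the face have moved); real systems track them from video.

import math

# The four positions in each vector below, in order.
FEATURE_NAMES = ["brow_raise", "lip_corner_pull", "lip_corner_depress", "cheek_raise"]

# Step 3's "database of images exemplifying particular patterns of motions",
# reduced to one exemplar vector per emotion (the values are invented).
EXEMPLARS = {
    "happiness": [0.1, 0.9, 0.0, 0.7],
    "surprise":  [0.9, 0.1, 0.0, 0.1],
    "sadness":   [0.6, 0.0, 0.8, 0.1],
    "neutral":   [0.0, 0.0, 0.0, 0.0],
}

def classify(observed, neutral):
    """Steps 2 and 3: code the change from the neutral face, then find the
    closest exemplar pattern and report it as the likely emotion."""
    delta = [o - n for o, n in zip(observed, neutral)]
    return min(EXEMPLARS, key=lambda label: math.dist(delta, EXEMPLARS[label]))

# A face whose brows rise and lip corners pull down relative to its own neutral pose.
neutral_face  = [0.1, 0.1, 0.0, 0.0]
observed_face = [0.7, 0.1, 0.7, 0.1]
print(classify(observed_face, neutral_face))  # -> "sadness"
```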

Rosalind Picard is a contagious bundle of excitement when she talks about "Mind Reader," a system developed by her team in the Affective Computing Group at the Massachusetts Institute of Technology in Cambridge, Mass.

"Mind Reader" uses input from a video camera to perform real-time analysis of facial expressions. Using color-coded graphics, it reports whether you seem "interested" or "agreeing" or if you're "confused" about what you've just heard.

The system was developed to help people with autism read emotions, as they have difficulty decoding when others are bored, angry, or flirting. Their lack of responsiveness makes them seem insensitive to others. Ms. Picard's team uses cameras worn around the neck or on baseball caps to record faces, which the software can then decode.

Picard, a pioneer in the field, says she learned a broader lesson from this research: If you can teach a person when to be sensitive to others, you probably could teach a machine to do so as well.

Jeffrey Cohn, a psychologist at the University of Pittsburgh, has used his knowledge of the human face in behavioral research. Mr. Cohn is among the relatively few experts who are certified to use the Facial Action Coding System, which classifies more than 40 action units (AUs) of the face. He is a man who can spot the inner corners of your eyebrows inching medially toward each other and then rising slightly, and call out: "That's AU one plus four," a combination of action units associated with sadness.

"The face is almost always visible," Cohn says. "People communicate a lot about feelings and thoughts through changes in facial expression."

Together with computer scientists, Cohn is working to get machines to read AUs and then describe which muscles moved and how. Such applications could do what Cohn did when he studied a videotape of a woman who professed to be distraught about the murder of several family members and tried to pin the blame on someone else. Cohn watched attentively and saw no genuine sadness reflected in her face. Sadness is a combination of AUs that is difficult to perform voluntarily: pulling down the corners of your lips while bringing your eyebrows together and raising them. What the woman did was raise her cheeks to simulate the downturned lips. Her brows stayed smooth.
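A machine applying Cohn's rule might look something like the toy sketch below. The brow combination (AU 1 + 4) comes from the article; the lip-corner depressor (AU 15) and cheek raiser (AU 6) are standard FACS codes added here for illustration, and the whole thing is a simplification of how a real AU-based classifier would work.

```python
# Toy illustration of reading sadness from FACS action units (AUs).
# AU 1 + 4 (the brow combination named above) plus AU 15 (lip-corner
# depressor) is treated as a genuine sadness display; AU 6 (cheek raiser)
# with smooth brows is flagged as a likely posed expression.
# A sketch of the idea, not Cohn's actual coding software.

GENUINE_SADNESS = {1, 4, 15}   # inner brow raise + brow lowerer + lip-corner depress
POSED_HINT      = {6}          # a cheek raise can mimic lowered lip corners

def read_sadness(active_aus):
    active_aus = set(active_aus)
    if GENUINE_SADNESS <= active_aus:
        return "sadness (brow and lip actions present)"
    if POSED_HINT <= active_aus and not active_aus & {1, 4}:
        return "possible posed sadness (cheeks raised, brows smooth)"
    return "no sadness display detected"

print(read_sadness([1, 4, 15]))   # a genuine display
print(read_sadness([6]))          # the pattern described in the murder case
```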

Researchers interviewed for this story concur that emotion recognition appeals to the security industry, which could use it in lie detection, identification, and expression reading. The best results are still obtained in controlled settings with proper lighting and good positioning of the face. An image from a security camera wouldn't give the software much to work with.

Picard says there is peril in working with "fake data" if this technology is used in security. Yes, machines may be able to read fear, but fear is not necessarily an indicator of bad intentions. And emotional cues cut both ways: sudden elation after a period of depression, for example, can be an indicator of suicidal intent.

Computers work on their small-talk skills

Tim Bickmore wants to teach computers to make small talk.

A graduate of Rosalind Picard's group at the Massachusetts Institute of Technology and now a computer-science professor at Northeastern University in Boston, Mr. Bickmore is studying how human relationships develop over time, and how conversation becomes less formal and more referential to past activity, in order to help computers do the same. Bickmore has created Laura, one in a series of "relational agents," computer programs that adapt to a user's emotional state, engage in small talk, and even remember information from previous conversations.

Bickmore tested Laura in a two-month study to promote walking among patients at the Boston Medical Center geriatrics clinic. He divided 21 people into two groups. Both groups received a pedometer and a brochure. One group was also asked to interact daily with Laura through a touch-screen computer.

In "her" robotic voice, Laura would first try to gauge the emotional state of the person. The response would prompt a change in Laura's facial features, tone, and the things she'd say. She eventually would present the results of that day's walk along with future goals, but she'd also find time for social conversation.

If the person told Laura they were going to walk in the park with a friend, the next day Laura would ask if the friend would join them again. Laura's software also had some humor built into it: If a user asked Laura where she lived, she'd reply: "I just live in this little box." Over the course of the trial, Laura became familiar with people's taste in food and movies.
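The behaviors described above, remembering a walking companion or a favorite food and working them into the next day's chat, can be sketched in a few lines. The sketch below is a hypothetical illustration of a relational agent's memory, not Bickmore's Laura software.

```python
# Hypothetical sketch of a relational agent that recalls details from
# earlier sessions and brings them up the next day.

class RelationalAgent:
    def __init__(self, name="Laura"):
        self.name = name
        self.memory = {}                      # facts learned in past sessions

    def remember(self, key, value):
        self.memory[key] = value

    def greet(self):
        lines = [f"Hi, I'm {self.name}. How are you feeling today?"]
        if "walk_companion" in self.memory:
            lines.append(f"Is {self.memory['walk_companion']} joining today's walk?")
        if "favorite_food" in self.memory:
            lines.append(f"Still a fan of {self.memory['favorite_food']}?")
        return " ".join(lines)

agent = RelationalAgent()
agent.remember("walk_companion", "your friend from the park")
agent.remember("favorite_food", "blueberry pancakes")
print(agent.greet())
```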

At the end of 60 days, researchers found that people who talked to Laura walked twice as many steps as those in the control group. One reason could be that Laura successfully established a bond with patients. Bickmore says that when Laura appeared on the screen to greet users, they would often wave back and say, "Hi, Laura."

Bickmore hopes to move more of this work into real time and have relational agents act as advisers. "Do I go for the cookie or the salad?" someone on a diet might ask Laura, who would respond accordingly.

Read more about relational agents and their use on Tim Bickmore's site: http://www.ccs.neu.edu/home/bickmore/
