Researchers train computers to 'read' emotions, which could help with teaching, security, people with autism – and cranky users.
Not even Dan Brown and his Da Vinci codebreakers dared tackle the mystery of Mona Lisa's smile. But Nicu Sebe, a computer vision researcher at the University of Amsterdam, Netherlands, did. He processed the enigmatic portrait with his "emotion recognition" software and – eureka! – Mona Lisa was happy (83 percent) and slightly disgusted (9 percent).
Mr. Sebe valiantly pursued other mysteries. He decoded the image of Che Guevara that adorns T-shirts worldwide and proclaimed that El Comandante was mostly sad. And the fellow in Edvard Munch's "The Scream"? He's much less frightened than surprised, Sebe declares.
Faces reveal emotions, and researchers in fields as disparate as psychology, computer science, and engineering are joining forces under the umbrella of "affective computing" to teach machines to read expressions. If they succeed, your computer may one day "read" your mood and play along. Machines equipped with emotional skills could also be used in teaching, robotics, gaming, sales, security, law enforcement, and psychological diagnosis.
Sebe doesn't actually spend research time analyzing famous images – that's just for fun. And calling Mona Lisa "happy" is not accurate science, but saying she displays a mixture of emotions is, Sebe says. Why? Because to accurately read an emotional state, a computer needs to analyze changes in expression against a neutral face, which Da Vinci did not provide.
If that's the case, are computers even close to reading emotions? You bet.
Computers can now analyze a face from video or a still image and infer almost as accurately as humans (or better) the emotion it displays. It generally works like this:
1. The computer isolates the face and extracts rigid features (movements of the head) and nonrigid features (expressions and changes in the face, including texture);