Paid product reviews are spreading on the Internet. Can technology help?
[Illustration: Phil Marden/The Christian Science Monitor]
Skeptical of advertisers' sales pitches, shoppers are putting more trust in online consumer reviews of products from electronics to pet food.
With rising trust, however, has come corruption. On Amazon.com, for instance, a suspiciously high 80 percent of reviews give four stars or higher, says Bing Liu, a computer scientist at the University of Illinois at Chicago who studies the inauthentic-review problem. Since most consumers don't write reviews unless they have criticisms to share, "who on earth are these people who are so happy?" he asks. He estimates that about 30 percent of Web reviews are fraudulent.
One example: Staffers at Reverb Communications, a Twain Harte, Calif., public relations firm, posed as consumers and praised clients' products at the iTunes store before settling Federal Trade Commission (FTC) charges of deception in 2010. Now, organizations are battling back with new technologies to detect fake reviews.
"It's basically an arms race," says Mr. Liu, whose university team is building software to catch fake reviewers. "We have algorithms [to identify false reviews], and then these guys are inventing ways to avoid these things."
At stake is the integrity of a 21st-century confidant. Seventy percent of global consumers trust online consumer reviews, up from 55 percent four years ago, according to a Nielsen survey released earlier this year. Meanwhile, the fraction that says it trusts paid television, radio, and newspaper ads has shrunk to just 47 percent.
Spotting fake reviews means discerning signs of a faker. One who's gushed about multiple refrigerator models at various websites probably hasn't bought and tested them all, Liu explains, but is instead being paid to praise. Likewise, when hotel reviews come from guests who received discounts in exchange, their "Love! Love! Love!" should be taken with a grain of salt, salt, salt.
But researching each reviewer's background would require more time and patience than most readers have. Even the FTC, with some 60 staffers who police advertising, lacks the resources to enforce rules governing online reviews. The agency instead focuses on educating businesses about legal boundaries.
"We're never going to be able to stop all false advertising," including false consumer reviews, says Mary Engle, the FTC's associate director for advertising practices. "It would be great if there were some technological innovation that would help solve the problem, or at least put a dent in it."
Faced with human limitations, pioneers are betting technology can fix what it helped create (or at least exacerbate).
Consider Yelp.com, a site where readers find more than 30 million consumer reviews of everything from restaurants to doctors. Reviewers must register, which helps weed out robots, according to Yelp. It discards apparent shills and malicious attacks on competitors, as well as reviews that seem to have been solicited by business owners. Some legitimate reviews may be tossed out in the process, since the filter isn't perfect, Yelp says.
At the University of Illinois at Chicago, researchers are targeting reviewers rather than reviews. Programs in development track a reviewer's Internet Protocol address to see what else he or she has been reviewing. Is that person generating dozens of reviews on various sites every week? Does every review from this particular source crow – or pan? Programs sniff out suspicious patterns by sifting through data so voluminous that only a computer could do it.
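The kinds of pattern checks described above can be sketched as a simple heuristic. This is only an illustration of the general idea, not the Illinois team's actual software: the data fields, thresholds, and rules here are all assumptions, and real detectors weigh far more signals across far more data.

```python
from dataclasses import dataclass

@dataclass
class Review:
    site: str     # where the review was posted
    rating: int   # 1-5 stars

def looks_suspicious(reviews, reviews_per_week, max_per_week=10):
    """Flag a reviewer using two of the cues mentioned in the article.

    Both thresholds are illustrative assumptions:
    - an unusually prolific reviewer (dozens of reviews a week), and
    - a reviewer whose every review crows, or every review pans.
    """
    # Cue 1: posting volume far beyond a typical consumer's.
    if reviews_per_week > max_per_week:
        return True

    # Cue 2: uniform ratings across many reviews (all raves or all pans).
    ratings = [r.rating for r in reviews]
    if len(ratings) >= 5 and len(set(ratings)) == 1:
        return True

    return False
```

A real system would combine many such weak signals (shared IP addresses, timing bursts, duplicated wording across sites) and score them statistically rather than applying hard cutoffs.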
Researchers at the Massachusetts Institute of Technology (MIT) Media Lab in Cambridge are taking a different tack: authentication. Projects afoot at the lab's ID Cubed initiative would allow users to establish a verified profile with corroborated information. The open-source application would provide tools so that users could effectively vouch for their identities across the Web, whether they're writing consumer reviews, taking part in online chats, or applying for a mortgage.
"People have been very naive and trusting initially, and then they get taken" by deceptive reviews and imposters, says John Clippinger, an MIT Media Lab research scientist and executive director of ID Cubed. "So now you're seeing the development of services that are vetted so that [reviewers'] reputations actually mean something."
Until tech solutions arrive, consumers need strategies for finding trustworthy reviews. Try relying on large samples, says Linda Sherry, director of national priorities for Consumer Action, a San Francisco-based nonprofit advocacy group. If dozens or hundreds of reviewers are raving, then the consensus might be more trustworthy than a small handful of glowing opinions. And don't worry too much, she adds, because the market has ways of weeding out troublemakers.
"You can't lie forever" without being found out, Ms. Sherry says. "We're all the cops on the Internet in a way. It's our eyes that really keep it honest – if it can be."