AI writes Yelp reviews that pass for the real thing

For their attack, the researchers used a type of deep learning model known as a recurrent neural network (RNN). Trained on large sets of data, this kind of AI can produce relatively high-quality short writing samples, the team writes in its paper; the longer the text, the more likely the AI is to slip up. Fortunately for them, short posts were exactly what their Yelp experiment called for.
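For the curious, a character-level generator along these lines can be sketched in a few dozen lines of PyTorch. Everything below (the architecture, the sampling loop, the toy corpus) is an illustrative assumption about how such a model might be put together, not the researchers' actual code:

```python
import torch
import torch.nn as nn

class CharRNN(nn.Module):
    """Minimal character-level language model: embed, LSTM, project to vocab."""
    def __init__(self, vocab_size, hidden_size=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.lstm = nn.LSTM(hidden_size, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, vocab_size)

    def forward(self, x, state=None):
        h, state = self.lstm(self.embed(x), state)
        return self.head(h), state

def generate(model, stoi, itos, seed, max_len=200, temperature=0.8):
    # Sample one character at a time, feeding each prediction back in.
    model.eval()
    ids = torch.tensor([[stoi[c] for c in seed]])
    out = list(seed)
    with torch.no_grad():
        logits, state = model(ids)
        for _ in range(max_len):
            probs = torch.softmax(logits[0, -1] / temperature, dim=-1)
            nxt = torch.multinomial(probs, 1).item()
            out.append(itos[nxt])
            logits, state = model(torch.tensor([[nxt]]), state)
    return "".join(out)

# Toy vocabulary built from a stand-in corpus; the real training data would be
# the public Yelp reviews the researchers used.
corpus = "the staff is super nice and the food is great. "
stoi = {c: i for i, c in enumerate(sorted(set(corpus)))}
itos = {i: c for c, i in stoi.items()}
model = CharRNN(len(stoi))  # untrained here, so the output will be gibberish
print(generate(model, stoi, itos, seed="the food "))
```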

They fed the AI a large set of publicly available Yelp restaurant reviews, which it used to learn to generate fake blurbs of its own. In a second stage, a customization process modified that text to home in on specifics about the target restaurant (the names of dishes, for example), yielding the final targeted fake review.
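The article doesn't spell out how that customization step works; one plausible reading is a simple substitution pass that swaps generic food words for dishes on the target restaurant's menu. The word list and replacement rule below are hypothetical, purely for illustration:

```python
import random
import re

# Hypothetical second-stage customization: swap a generic food noun in the
# generated text for a dish from the target restaurant's menu. The word list,
# menu, and substitution rule are illustrative assumptions, not the paper's.
GENERIC_FOODS = ["chicken", "burger", "pasta", "pizza"]

def customize(review, menu):
    for word in GENERIC_FOODS:
        if re.search(rf"\b{word}\b", review):
            # Replace only the first occurrence to keep the text natural.
            return re.sub(rf"\b{word}\b", random.choice(menu), review, count=1)
    return review

generic = "The chicken is very good and the garlic sauce is perfect."
print(customize(generic, ["general tso's chicken", "scallion pancakes"]))
```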

Here's a typical post by the robot foodie about a buffet place in NYC: "My family and I are huge fans of this place. The staff is super nice and the food is great. The chicken is very good and the garlic sauce is perfect. Ice cream topped with fruit is delicious too. Highly recommended!"

Not too shabby. Here's another about the same restaurant: "I had the grilled veggie burger with fries!!!! Ohhhh and taste. Omgggg! Very flavorful! It was so delicious that I didn't spell it!!" Okay, so that's not perfect, but we all make errors now and again.

As it turns out, the reviews were good enough to evade machine-learning detectors, and even human readers couldn't reliably pick them out as fake.

These days, sites use both machine learning and human moderators to track down spam and misinformation. That approach has proven successful at catching crowdturfing campaigns, in which attackers pay a large network of real people to write fake reviews. But the researchers warn that current defenses could come up short against an AI attack like theirs. Instead, they argue, the best way to fight it is to focus on the information that gets lost during the RNN's training: because the system prioritizes fluency and believability, other properties of the text, like the distribution of characters, take a subtle hit. According to the team, a detection program could sniff out those statistical flaws if it knew where to look.
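As a rough sketch of what such a check might look like, the snippet below compares a suspect review's character frequencies against a reference distribution drawn from known-human text. The distance measure (KL divergence) and the flagging threshold are assumptions; the paper's actual detector may differ:

```python
import math
from collections import Counter

def char_distribution(text):
    """Normalized character frequencies for a piece of text."""
    counts = Counter(text.lower())
    total = sum(counts.values())
    return {c: n / total for c, n in counts.items()}

def kl_divergence(p, q, eps=1e-9):
    # KL(p || q) over the union of characters, smoothing unseen ones with eps.
    chars = set(p) | set(q)
    return sum(p.get(c, eps) * math.log(p.get(c, eps) / q.get(c, eps))
               for c in chars)

# Stand-ins: a real detector would estimate the reference distribution from a
# large corpus of known-human reviews. The 0.1 threshold is a made-up value.
reference = char_distribution("My family and I are huge fans of this place. "
                              "The staff is super nice and the food is great.")
suspect = char_distribution("Omgggg! Very flavorful! It was so delicious!!")
score = kl_divergence(suspect, reference)
print("flag for review" if score > 0.1 else "looks human")
```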

The paper warns that, in the wrong hands, this type of attack could even be turned on bigger platforms such as Twitter and other online discussion forums. The researchers conclude that it's therefore critical for security experts to come together and build the tools to stop it.

Update: A Yelp spokesperson sent Engadget the following statement regarding the company's approach to moderation: "Yelp has had systems in place to protect our content for more than a decade, but this is why we continue to iterate those systems to catch not only fake reviews, but also biased and unhelpful content. We appreciate the authors of this study using Yelp's system as 'ground truth' and acknowledging its effectiveness.

While this study focuses only on creating review text that appears to be authentic, Yelp's recommendation software employs a more holistic approach. It uses many signals beyond text-content alone to determine whether to recommend a review, and can not-recommend reviews that may be from a real person, but lack value or contain bias.

We encourage the authors to continue research on this important topic so consumers can continue to rely on review site content."

https://www.engadget.com/2017/09/01/ai-fake-yelp-reviews/