Online Reviews and the Problem of Authenticity

It isn’t every day that the Federal Trade Commission takes notice of the digital marketing industry, but that’s what happened several days ago when the FTC announced a $4.2 million fine against online fast-fashion retailer Fashion Nova over allegations that the company used the Yotpo platform to selectively publish only positive reviews of its products.

In addition, the FTC sent warning letters to ten companies that help businesses manage online reviews, including companies that help local businesses request, analyze, and respond to reviews on platforms like Google. The letters, acquired by Mike Blumenthal of Near Media via a Freedom of Information Act request, cite excerpts from company websites describing service offerings that may run afoul of FTC guidelines, and direct the recipients to “terminate any services that allow for or result in consumer deception.”

The FTC has also issued updated guidelines to help review platforms understand what is considered a violation of policy. The cited practices form an extensive list, but in summary, businesses and the review platforms that serve them should not:

  • Ask for reviews selectively (only from satisfied customers) or suppress negative reviews from publication, a practice commonly called review gating
  • Offer incentives to reviewers that are not clearly disclosed, or that stipulate the review must be positive
  • Ask for reviews from people who have no experience with the relevant products or services
  • Edit negative reviews to make them sound more positive
  • Publish reviews in such a way that positive reviews are more prominent than negative ones

Furthermore, the FTC advises that review platforms should:

  • Clearly and publicly describe how reviews are collected, processed, and displayed, and how ratings are calculated
  • Implement procedures to identify and take action against fake or suspicious reviews and to respond to user reports of fake reviews

What the FTC’s reviews decision means for reputation management

These developments from the FTC mark a significant moment in the history of online review management. Practices such as review gating have been relatively widespread in the industry for years, despite warnings such as this Help Center update published by Google in 2018: “Don’t discourage or prohibit negative reviews or selectively solicit positive reviews from customers.”

Such biased practices have flourished, despite Google’s warning, for three reasons: first, many businesses fear that even a single negative review will cause irreparable damage; second, marketing platforms are loath to go against the wishes of their clients; and third, the threat of negative consequences has seemed distant and unrealistic.

Now that the federal government has weighed in, it’s very likely all of that will change — whether because marketing platforms will clean up their practices on their own initiative in order to stay out of the FTC’s crosshairs or because business clients will insist platforms comply with the rules so they don’t risk becoming the next Fashion Nova.

To be clear: the FTC is not saying businesses can’t ask customers to leave them a review, as long as they do so in a manner that conforms with the other stated guidelines. (Google also encourages businesses to ask for reviews, though Yelp has a longstanding policy forbidding this practice, under the rationale that asking for reviews in itself tends to bias responses in favor of the business.)

Review integrity is, of course, a broader topic than this. Outside of businesses soliciting reviews and marketing platforms enabling them to do so, we have review publishers such as Google and Yelp who are well known for their efforts to combat review fraud and, on Google’s part especially, for the prevalence of fake reviews despite the company’s apparent best efforts.

Indeed, Google seems to have published a recent blog post, “How Reviews on Google Maps Work,” as an indirect response to the FTC’s actions. The post outlines Google’s multiple lines of defense against reviews that violate its content policies, including machine learning software that blocks reviews from being published in the first place if they contain inflammatory language or exhibit red flags indicating suspicious activity, such as a business receiving a large number of all-positive or all-negative reviews within a narrow timeframe. In addition to these automated processes, Google employs human moderators who decide whether to remove reviews flagged by Google users.

Yelp has also just issued its most recent annual report on its efforts to combat misinformation, stating that 6% of the 19.6 million reviews written on the platform in 2021 were removed due to policy violations. According to the report, more than 15,500 reviews were removed from Yelp for criticizing businesses’ Covid policies, and 29,300 reviews were removed because of racist or discriminatory content. Yelp says it placed over 1,850 notices on business profile pages warning consumers of attempts by the business to manipulate reviews.

The effort to maintain the authenticity of online reviews is multi-pronged and each stakeholder has a different point of view. Businesses want to protect their reputations; marketing platforms want to make their clients happy; review publishers want to be known for providing value to consumers; consumers want to know they can trust in the reviews that form an important part of their decisions to choose one business over another. As usual, the customer is king in this situation: if the integrity of reviews is seriously threatened, the entire ecosystem weakens and the power of every other constituent — of publishers to attract users, of businesses to convert customers, of vendors to win clients — diminishes.

Damian Rollison writes the Streets Ahead column for Street Fight. He is Director of Market Insights at SOCi and can be reached via Twitter at @damianrollison. SOCi is the publisher of Street Fight.