Local Platforms Promote Integrity with Consumer Confidence at Risk


Nextdoor is the latest local platform to publish what it calls a Transparency Report, designed to inform the public about its efforts to keep its online community free of problematic content. In Nextdoor’s case, the focus is on reducing incidents of hate speech and incivility in order to promote healthy community interaction. This makes sense given Nextdoor’s core value proposition as a place to bring local communities together online. Were these communities to become toxic, much of their appeal to Nextdoor’s desired user base would disappear.

Nextdoor uses technology, developed in collaboration with social scientists, to screen user content before it is posted on the platform. If it detects content that might be hurtful to others, Nextdoor issues a Kindness Reminder, asking the user to consider editing the post before it goes public. For posts determined to contain racist content, Nextdoor has a similar alert called an Anti-Racism Reminder. The company reports that about 34% of users who see these reminders either edit or remove their posts. Posts that make it through the initial screening are subject to community moderation, whereby content can be removed if it is inflammatory or contains misinformation. Nextdoor makes a point of highlighting a very low incidence of problematic content overall, as illustrated in the graphic.

Courtesy Nextdoor Transparency Report

A telling detail in the release announcing the Nextdoor report is this quote from UK Digital Minister Chris Philp: “It’s great to see Nextdoor taking proactive steps to improve transparency. This shows tech platforms embracing the spirit of our groundbreaking new online laws before they come into force.” Philp’s reference is to a UK bill that would let the government “verify independently the accuracy of companies’ transparency reports.” In the US as in the UK, recent government actions, such as the FTC’s fine against online retailer Fashion Nova for misleading practices in the publication of online reviews, have put companies on notice that they would do well to act in advance of possible legislative or regulatory penalties related to the integrity of user-generated content.

Nextdoor’s report takes its place alongside similar public relations efforts on the part of Google and Yelp, both of which have recently issued reports (though not for the first time) detailing their content moderation practices. Because user-generated content on Google and Yelp centers largely on reviews of local businesses, those reports focus on each company’s efforts to combat fake reviews as well as reviews that violate content policies.

There are signs that the PR story is rosier than the reality. According to a recent NBC News story, groups on Instagram and Facebook actively solicit paid reviews from members of Yelp’s Elite Squad, advertised by the company as high-volume contributors who are especially trustworthy. One Elite reviewer said he was paid $30 to post a review he didn’t write for a moving company he’d never hired, and another claimed to be part of an online chat group where a few thousand Yelp Elite reviewers were offered $25 to $50 for each fake review.

Google, too, showcases high-volume content contributors on Google Maps by means of its Local Guides program, modeled in part on Yelp’s Elite Squad. (The Elite Squad, launched in 2006, has an undisclosed number of members; Local Guides, launched in 2015, had 150 million participants worldwide as of March 2021.) To become a Local Guide, users must agree not to “impersonate another person or entity” or to “submit fake, falsified, misleading, or inappropriate reviews, edits or removals.” By promoting Local Guides as contributors whose local knowledge millions of users rely on, Google implies that they are trustworthy.

And yet, according to review fraud expert Curtis Boyd, “Unfortunately, it’s very easy to farm Local Guide accounts, and many of them are fake. I don’t think reviews from Local Guides are more trustworthy than reviews from ordinary Google reviewers.”

Boyd, who founded the Transparency Company to combat fraud in local reviews, uses sophisticated software to ferret out fake reviews by looking at patterns in user profiles, such as the distance between businesses reviewed by the same person, as well as patterns in the content of reviews, such as similar authorship styles in reviews written under different user names. According to his company’s analysis, a remarkable 15% of all Google reviews show signs of being fraudulent.
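To make that approach concrete, here is a toy sketch in Python of the two signals described above. This is not the Transparency Company’s actual software; the function names, thresholds, and sample data are hypothetical, and a real system would combine many more signals before flagging anything as fraud.

```python
# Toy illustration of two fake-review signals: implausible geographic spread in one
# profile's reviewed businesses, and near-identical text posted under different names.
from itertools import combinations
from math import radians, sin, cos, asin, sqrt
from difflib import SequenceMatcher

def haversine_km(a, b):
    """Great-circle distance in kilometers between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def suspicious_geography(business_locations, threshold_km=500):
    """Flag a profile whose reviewed businesses are implausibly far apart."""
    return any(haversine_km(a, b) > threshold_km
               for a, b in combinations(business_locations, 2))

def suspicious_similarity(reviews, threshold=0.8):
    """Flag pairs of reviews posted under different user names with near-identical text."""
    flagged = []
    for (user1, text1), (user2, text2) in combinations(reviews, 2):
        if user1 != user2 and SequenceMatcher(None, text1, text2).ratio() > threshold:
            flagged.append((user1, user2))
    return flagged

# Hypothetical example data
profile = [(32.7157, -117.1611), (40.7128, -74.0060)]  # San Diego and New York
reviews = [("user_a", "Great service, fast and friendly, highly recommend!"),
           ("user_b", "Great service, fast and friendly, highly recommend!!")]

print(suspicious_geography(profile))   # True: businesses roughly 3,900 km apart
print(suspicious_similarity(reviews))  # [('user_a', 'user_b')]
```

In practice, detection systems of this kind weigh many such signals together rather than treating any single pattern as proof of fraud, which is part of why third-party analyses like Boyd’s report rates of reviews that “show signs of” being fraudulent rather than confirmed fakes.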

Consumers are both reliant on reviews and increasingly aware of the prevalence of fraud, according to a recent survey from BrightLocal, which found that 98% of consumers read reviews of local businesses and that 62% believe they have encountered fake reviews. (Boyd’s statistics suggest the true prevalence is much higher than consumers realize.) Suspicion among consumers about the integrity of reviews may be behind a drop in trust detected in the same survey: only 49% of consumers trust online reviews as much as personal recommendations, compared to 79% just two years ago.

Courtesy BrightLocal Local Consumer Review Survey 2022

The question is why publishers aren’t more stringent in confronting fraudulent content. Transparency reports and content moderation policies notwithstanding, those like Boyd who fight review fraud on a daily basis still see significant problems in the local ecosystem, problems that exhibit patterns companies like Google could presumably detect as easily as a third-party watchdog can. Some allege that fake reviews benefit publishers by increasing overall user activity; observers have even spotted reviews for sale via Google ads, despite the fact that a single fake review carries a potential FTC fine of $46,517.

But that fine would be levied on the reviewer, not the publisher, which brings us to the other obvious reason publishers are treating fraudulent content as a PR problem: they stand to lose more from the consumer perception that fake reviews are rampant than from the existence of any specific fake review. If that perception grows, and it seems to be growing, publishers may be forced to take real action, especially if scrutiny from the FTC and other governmental bodies continues to apply its own pressure and to raise consumer awareness.

Damian Rollison is Director of Market Insights at SOCi.