Moderating content

Moderating content is essential for business today because user- and influencer-generated content like customer ratings and reviews is only as valuable as it is trustworthy. Fake reviews tarnish both a brand’s reputation and the genuine reviews on its products, rendering them useless.

According to our research, 75% of consumers said that if they notice a fake review for a product on a site, it would impact their trust in reviews for other products on the same site. This is why we at Bazaarvoice work so diligently to screen out fake reviews before they’re even published.

After years of experience moderating content, we’ve learned that consumers can often spot fake reviews from the content alone. According to our survey of 10,000 global shoppers, respondents said the top ways they spot fake reviews are:

  1. Multiple reviews with similar wording (56%)
  2. Review content doesn’t match the product (53%)
  3. An overwhelming number of 5-star reviews (36%)
  4. Grammatical errors and misspellings (35%)
  5. Only a rating with no written review or imagery (31%)

These can all be indicators of content’s true nature and give you a heads-up that something may be suspect. However, reading the text alone is not enough to catch all fake reviews. Because of this, we use both text patterns and data signals to monitor behaviors, much like the monitoring you’d see with financial transactions.
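As a purely illustrative example, here’s a minimal sketch of a text-pattern check for the first signal on the list above (multiple reviews with similar wording). It isn’t Bazaarvoice’s actual moderation logic, and the similarity threshold is an arbitrary assumption:

```python
# Toy check for "multiple reviews with similar wording" -- illustration only.
from difflib import SequenceMatcher
from itertools import combinations

def find_similar_reviews(reviews: list[str], threshold: float = 0.85) -> list[tuple[int, int]]:
    """Return index pairs of reviews whose wording is suspiciously similar."""
    flagged = []
    for (i, a), (j, b) in combinations(enumerate(reviews), 2):
        if SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold:
            flagged.append((i, j))
    return flagged

reviews = [
    "Amazing product, changed my life, five stars!",
    "Amazing product, changed my life, 5 stars!!",
    "Arrived late and the box was damaged.",
]
print(find_similar_reviews(reviews))  # [(0, 1)] -- the near-duplicate pair
```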

Using text and data signals

Often, fake reviews are identified not just by what’s said or how it’s said, but by the information that can be gathered from other data signals about a person and the review(s) they left. When moderating content, we look for patterns and behaviors that don’t belong.

For instance, if the same person reviews an item in two different countries, that may not be suspect on its own. But if we see they’re leaving those reviews in rapid succession, when they couldn’t possibly be in two places at once, then we know they’re trying to provide false information. So we’ll take their content down and block them.
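To make that concrete, here’s a minimal sketch of an “impossible travel” check, assuming each review record carries a reviewer ID, country, and timestamp. The field names and the six-hour window are assumptions for illustration, not a description of the production system:

```python
# Toy "impossible travel" check: flag reviewers posting from different
# countries within a short window. Field names are hypothetical.
from datetime import datetime, timedelta

def flag_impossible_travel(reviews: list[dict], window: timedelta = timedelta(hours=6)) -> set[str]:
    suspicious = set()
    by_reviewer: dict[str, list[dict]] = {}
    for r in reviews:
        by_reviewer.setdefault(r["reviewer_id"], []).append(r)
    for reviewer_id, posts in by_reviewer.items():
        posts.sort(key=lambda r: r["submitted_at"])
        for prev, curr in zip(posts, posts[1:]):
            crossed_border = prev["country"] != curr["country"]
            close_in_time = curr["submitted_at"] - prev["submitted_at"] <= window
            if crossed_border and close_in_time:
                suspicious.add(reviewer_id)
    return suspicious

reviews = [
    {"reviewer_id": "u1", "country": "US", "submitted_at": datetime(2023, 5, 1, 9, 0)},
    {"reviewer_id": "u1", "country": "DE", "submitted_at": datetime(2023, 5, 1, 10, 30)},
]
print(flag_impossible_travel(reviews))  # {'u1'}
```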

This isn’t unlike the technology used to verify that credit card purchases are being made by the card’s true owner. It’s an extra level of machine moderation that goes beyond the human moderation we also use, in which hundreds of people manually moderate reviews as well.

This additional layer of security helps tremendously with the efficiency of validating reviews, simply because of the sheer volume we receive, and because the nefarious actors who leave fake reviews are constantly innovating.

The digital pattern of fake reviews 

Because fraudsters are continuously evolving, we never consider our moderation tactics to be final. We’re always working to educate ourselves while working with the most innovative vendors to identify new trends. Probably the biggest pillar of modern review moderation is that fraud is cumulative.

By this, I mean that one review won’t look like much or appear out of place. One review will also rarely skew a star rating or the overall perception of a product. Unless it is the first review ever left for that product — and even then, it won’t be for long.

We’re focused instead on finding the kind of fraud that creates a widespread, unfair impression of the product shoppers will be receiving. And typically one stray review won’t do that.
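A rough sketch of what “cumulative” means in practice: individual reviews contribute small suspicion scores, and a product is only escalated once the accumulated evidence crosses a threshold. The signal names, weights, and threshold below are invented for illustration, not our actual scoring model:

```python
# Sketch of cumulative fraud scoring; signals, weights, and threshold are invented.
from collections import defaultdict

SIGNAL_WEIGHTS = {
    "duplicate_wording": 0.4,
    "burst_of_5_star": 0.3,
    "mismatched_content": 0.5,
}

def products_to_escalate(flags: list[tuple[str, str]], threshold: float = 1.0) -> list[str]:
    """flags: (product_id, signal_name) pairs emitted by upstream checks."""
    scores: defaultdict[str, float] = defaultdict(float)
    for product_id, signal in flags:
        scores[product_id] += SIGNAL_WEIGHTS.get(signal, 0.1)
    return [p for p, score in scores.items() if score >= threshold]

flags = [
    ("sku-42", "duplicate_wording"),
    ("sku-42", "burst_of_5_star"),
    ("sku-42", "duplicate_wording"),   # evidence accumulates across reviews
    ("sku-77", "mismatched_content"),  # a single flag stays below threshold
]
print(products_to_escalate(flags))  # ['sku-42']
```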

Moderating sponsored content

When consumers are looking at user-generated content such as ratings, reviews, and customer photos and videos, they’re right to assume the content comes from shoppers just like themselves, with no agenda or stake in the brand or product’s performance. They should be able to assume that what they’re reading or seeing is unbiased, not paid for, and from a neutral third party.

These assumptions allow shoppers to know they’re getting an authentic, genuine, and accurate account of someone’s opinion or experience.

This is exactly why influencers are legally required to disclose that a product they post about or review is an #ad when sponsored by a brand. Now, authoritative bodies are imposing these same regulations on everyday consumers posting reviews.

In these reviews, consumers expect brands to disclose any relationship that could compromise a reviewer’s impartiality. This includes circumstances such as:

  • When the poster received an incentive to leave a review. This includes free products, discounts, an opportunity to be part of a sweepstakes, or other potential items of value in exchange for an honest review. Really, anything that would drive a consumer to write a review when they likely wouldn’t have of their own accord
  • If the poster has a material relationship with the brand. This might mean they are an employee, partner, or vendor of the brand, or someone whose livelihood depends on the product or brand’s success
  • If the poster has a close personal connection to the brand. While you don’t need to disclose if your husband works in-store for a big retailer from which you bought the product, you would need to disclose if your husband is the CFO of the brand that sells the product

When in doubt, more disclosure is better. 27% of consumers think brands with fake content should be fined up to 30% of their revenue. So it’s certainly something shoppers are thinking about — and dislike.

How brands should be moderating sponsored content

The most important aspect is that brands need to be clear and conspicuous about their sponsored content. All disclosures should be obvious to a consumer. They shouldn’t have to do the same amount of sleuthing to find a review disclosure as, say, when they’re “researching” someone they met online before a first date. It should require no additional clicks beyond where they saw the original review.

For example, even if the review is long and is at risk of being cut off with a “see more text” option, all disclosures about that review need to appear before the text cut-off. Do you have a cute name for your rewards or sampling program through which shoppers submit incentivized reviews? Don’t assume a new consumer knows what that cute name means and rely on it alone in the disclosure (“This reviewer is a *CuteSamplingProgramName* Member!”).
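As a minimal sketch of that principle, the rendering logic below always places the disclosure ahead of any truncated review body, so it can’t be hidden behind a “see more” click. The label wording and the 200-character cut-off are assumptions for illustration, not a real Bazaarvoice template:

```python
# Keep the incentive disclosure visible even when long review text is
# truncated behind a "see more" link. Label and cut-off are hypothetical.
def render_review(body: str, incentivized: bool, cutoff: int = 200) -> str:
    disclosure = "[Incentivized review: reviewer received a free product]"
    visible = body if len(body) <= cutoff else body[:cutoff].rstrip() + "… (see more)"
    # The disclosure is prepended, so truncation can never hide it.
    return f"{disclosure}\n{visible}" if incentivized else visible

print(render_review("Great blender. " * 30, incentivized=True))
```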

Be obvious about how the review is incentivized. This is relevant to reviews on product pages as well as all social media posts.

One thing that unfortunately isn’t always considered when it comes to disclosing sponsored content is ADA compliance. When including images or videos, make sure your disclosures are clear and specific, and provided in both written and audio form.

Work with the right content moderating partner

Incentivized reviews are not less valuable than organic ones, but they are less trusted if they’re not properly badged with obvious and understandable disclosures. To make sure your shoppers know that your content is authentic and trustworthy, you need to be as transparent as possible. Including disclosures on incentivized reviews is not only one of the best ways to achieve that, but has become legally required, as well. 

Not all fraud looks like fraud at first. But once it’s discovered, at Bazaarvoice we’ll remove all content associated with the user and block future submissions as best we can. Additionally, we’re very selective about who we partner with to make sure any content partners align with our values.

The Bazaarvoice Authentic Reviews Trust Mark is the gold standard seal of approval to prove your content is genuine.

Learn more about Bazaarvoice moderation and authenticity here.
