Policy paper

Tackling Fake and Misleading Reviews

In this paper Which? sets out how platforms can take a Safety by Design approach to tackle fake reviews and ensure that their content moderation processes are sufficient to prevent consumer harm.

Summary

Which? has investigated online consumer reviews for the past six years and has consistently found obvious fake and misleading reviews on a range of online platforms. The Government’s research found that roughly one in seven reviews in popular product categories on e-commerce platforms are fake. These reviews cause real harm to consumers, with our research showing that fake and misleading reviews make consumers more likely to waste money on inferior products that don’t meet their needs.

Drawing from our years of experience investigating and researching fake and misleading reviews, as well as engagement with a number of review hosting platforms, in this paper we set out a way forward in tackling fake and misleading reviews. We summarise the actions platforms currently take to tackle fake and misleading reviews, the ongoing problems we find and present recommendations for the steps platforms should take to protect consumers.

Platforms must properly assess the risks of the business model they choose and the design of their service to determine the potential harm it poses to consumers. This must inform the decisions they make about the potential mitigations to prevent fake and misleading reviews. These can include requiring reviews to be linked to verified purchases, automated pre-publication checks, user reporting of posted reviews and attempts to tackle the ecosystem of review brokers.

To ensure that there is meaningful action against fake and misleading reviews more is needed from industry, the Government and regulators:

  • Platforms must properly assess the risk that their system design and business model poses in relation to fake and misleading reviews and take reasonable and proportionate measures to ensure the reviews they host are genuine.
  • The Government must:
    • ensure that there is legal certainty that ‘hosting reviews without taking reasonable and proportionate steps to ensure they are genuine’ is a criminal offence, alongside adding offences on the buying and selling of reviews
    • ensure that the Competition and Markets Authority (CMA) has sufficient powers to pursue bad actors.
  • The CMA needs to use the powers it is given through the Digital Markets, Competition and Consumers Bill and associated secondary legislation to protect consumers from fake and misleading reviews by taking meaningful action against platforms that fail to take the necessary steps.

Introduction

The ability for consumers to share feedback on products with one another can make markets work better. Genuine user reviews provide a valuable service, helping consumers choose between products and services where otherwise they might have insufficient information. Consumers can find the products that suit them, and the best businesses with the best products get rewarded for their efforts. 

Unfortunately, fake and misleading online reviews have proved an enduring problem. Many online platforms hosting reviews of products and services are still inundated with fabricated or misleading customer feedback. Indeed, research from the Department for Business and Trade found that for widely used e-commerce platforms, 11-15% of reviews in the categories they assessed were fake.

Fake and misleading reviews harm consumers. Our research has shown that fake and misleading reviews make consumers much more likely to choose poor-quality products, risking them wasting money on products which don’t meet their needs. In some cases this can have very serious consequences. One consumer told us that a digital thermometer boosted by fake reviews left them unable to tell whether they had Covid-19 symptoms. Another told us of their experience of receiving poor legal representation from a firm manipulating reviews.

Rather than helping to make markets work better, fake and misleading reviews put consumers at risk of being misled and good companies at risk of being undercut. This undermines the principles of fair competition which make markets work and incentivise businesses to innovate and invest in making their products better. Allowing practices like fake and misleading reviews to flourish will ultimately be to the detriment of Britain’s economic performance.

Platforms that host customer reviews play a key role in tackling the problem, both in preventing the publication of fake and misleading reviews in the first place and in detecting and moderating bad behaviour where it occurs. Done effectively, this can both reduce the incidence of fake and misleading reviews present now and increase the barriers to posting them in the future.

Platforms are already taking action in many areas, with much of this to be applauded. However, it is still clearly too easy in many instances for companies to commission fake and misleading reviews to artificially boost the reputations of their products. 

This paper provides an overview of the methods platforms that host reviews use to tackle fake and misleading reviews, some of the issues with those methods and recommendations for improving them. It has been informed by Which?’s years of experience investigating and researching fake and misleading reviews as well as a series of background discussions with review hosting platforms including Which?’s Trusted Traders service that hosts reviews. 

The business model and size of individual platforms may require tailored approaches to be taken to tackle fake and misleading reviews. The exact details of how platforms tackle fake and misleading reviews will also evolve over time as those involved change tactics and platforms’ technology improves.

Platforms should be expected to design their services with appropriate systems to prevent users from being misled by fake and misleading reviews. Their systems should allow consumers to report fake and misleading review activity and to share the resulting data with the regulator and other hosting sites to better tackle review brokers. Platforms could also tell consumers where they may have been misled by fake and misleading reviews and inform them of their possible rights to redress.

In the following chapters, we look at particular approaches that platforms can take to address fake and misleading reviews, including risk assessment, verification, content moderation, off-platform activity and auditing. We also look at the issue of manipulated reviews, and the types of interventions platforms should be making to tackle these specifically. 

Definitions

To aid in identifying the best solutions to the problem of fake reviews, it can be helpful to break down the issue into two distinct categories, both of which can mislead and harm consumers:

  • Fake Reviews - This is where there has been no genuine first-hand experience of the product or service by the reviewer. These reviews are often left by accounts created primarily for the purpose of leaving fake reviews, or by pre-existing accounts that have been taken over for that purpose.
  • Misleading Reviews - This is where there has been a real experience of a good or service but the review is misleading in one of two ways:
    • Incentivised Reviews - This is where the person leaving the review has had a first-hand experience of the product or service but has received an incentive in a way that biases their review.
    • Manipulated Reviews - This is where genuine reviews are misused to mislead consumers by the artificial removal of accurate reviews, the addition of inappropriate reviews, disincentivising consumers from posting negative reviews or only soliciting reviews from people who have had a positive experience.

Risk Assessment

Why it matters:

Risk assessments are an essential tool for determining whether a platform is taking sufficient action to tackle fake reviews. The appropriateness of a platform’s systems to protect consumers should derive from the risks identified in its assessments.

One useful way to think about the design of an online platform is to look at the risks of harm that each element of the design could create for its users and how the design of the system and processes could be used to mitigate that harm. Offering users the ability to leave reviews creates the risk of users leaving fake and misleading reviews. That risk can be mitigated by features to prevent and detect fake and misleading reviews such as verification or automated detection systems. Risk assessments are the process for evaluating these risks and working out appropriate mitigations. They have a role in both assessing changes to consumer-facing features (e.g. does adding photos to reviews decrease or increase fake and misleading reviews?) and changes to safety features (e.g. does requiring users to verify their email address substantially reduce fake reviews?).

The Digital Trust and Safety Partnership (an industry body that seeks to promote best practice in trust and safety) states that best practice should include identifying, evaluating and adjusting for risks in the development of new features. This could include developing insight and analysis capabilities to understand patterns of abuse and identify preventative mitigations that can be integrated into products, and using in-house or third-party teams to conduct risk assessments to understand potential risks. Ofcom has made clear that risk assessments will be key to its online safety regulation and has begun publishing the risk assessment processes of the video sharing platforms it already regulates.

An effective risk assessment would consider the business model of the review hosting platform and the potential ways that a platform design being considered could be misused. This risk assessment would form the basis for how platforms implement each of the mitigations discussed in this paper and presented in the accompanying figure. 

Figure 1: Overview of the methods for tackling fake reviews

What platforms do:

Platform features can potentially incentivise or facilitate incentivised reviews. One past example is Amazon’s top reviewer programme, which created a leaderboard for accounts that left the most reviews on Amazon. Investigative reporting from multiple outlets has linked the accounts at the top of this leaderboard to large numbers of suspected incentivised reviews. External consultancies also recommended that sellers contact these top reviewers to solicit reviews. The programme could in effect have provided a means for suppliers of fake reviews to advertise themselves to potential purchasers by highlighting the number of reviews they had left. Amazon discontinued the programme in late 2022.

When a review hosting platform is considering launching a new feature or a major design change, the Trust and Safety team (which is the team responsible for protecting users and oversees content moderation systems) is typically consulted as part of the process. However, the extent to which they are involved appears to differ between platforms. This ranges from an informal consultation process with Trust and Safety acting as one of many internal stakeholders, to a formal sign off process where a feature cannot be launched without the explicit approval of the Trust and Safety team with key risks documented and tested.

Where it’s not working:

The risk of consumer harm from fake and misleading reviews is currently too high on many platforms. Research from the Department for Business and Trade found that for widely used e-commerce platforms, 11-15% of reviews in the categories they assessed were fake. Our research and investigations have shown the negative impact of these reviews on businesses and consumers, including consumers choosing lower-quality products, businesses being forced to invest in managing their online reputation against fake negative reviews, and less efficient markets for products and services. In our view, the widespread presence of fake and misleading reviews and the consumer harm it causes shows that review hosting platforms are not adequately assessing the risk to consumers and providing appropriate mitigations to protect them from fake reviews.

Recommendations

Platforms should conduct thorough risk assessments of the impact of their system design on enabling fake and misleading reviews and of the mitigations they should have in place. Platforms should ensure that they have Safety by Design, the process of designing a service to reduce the risk of harm to those who use it, as a core part of developing new features. Platforms should have clear risk assessment processes in place for assessing new features that could potentially increase the number of fake reviews, facilitate incentivised reviews or be used to manipulate reviews. Platforms should also be expected to conduct a new risk assessment when they become aware of evidence that an existing feature is being misused in a way not anticipated in their original assessment.

These risk assessments should show how the risks the platform faces have led to its approach to verification, pre-publication content moderation, post-publication content moderation, redress mechanisms, off-platform activity and continuous improvement. The risk assessment should show how these mitigations are adequately designed to tackle the risk of fake reviews created by the platform’s design and business model. It should clearly outline how design elements that can be misused, such as review merging or unverified reviews, are counteracted with appropriate mitigations.

A failure to conduct a thorough risk assessment and mitigation should be taken by regulators as a key sign that the platform has not taken reasonable and proportionate steps to ensure that the reviews it hosts are genuine.

Stopping fake and incentivised reviews before they are published

Verification

Why it matters:

Verification describes a range of methods to check that a user providing a review is a genuine consumer and has used the relevant product or service. This can be a useful way of tackling fake reviewers. Alongside the common sense intuition that verifying a user increases the barriers for those that wish to create fake reviews, there is a range of evidence that suggests that unverified reviews are more likely to be fake. Which? investigations have found suspicious patterns involving large numbers of unverified reviews appearing in a short period of time that were likely to be fake. Trustpilot’s transparency report shows that consumers were more likely to flag unverified reviews than verified reviews. ReviewMeta’s analysis of Amazon data in 2017 found that unverified reviews were substantially more likely to be deleted. Data from ReviewMeta analysed by Which? showed that the number of unverified reviews increased substantially between 2018 and 2019 alongside suspicious review activity.

What platforms do:

Different platforms have a wide variety of criteria to verify whether a user is an authentic reviewer. Approaches vary:

  • Some methods focus on verification of the user (for example requiring an email address, a form of Government ID, or that the user has previously spent money on the site)
  • Other methods focus on verification of a specific transaction (for example using a unique identifier to tie a user to a specific transaction or a direct invitation system). 

Platforms may also use other verification methods to secure accounts (such as two-factor authentication or checks on IP address, internet service provider, or device ID) and to prevent bots or duplicate accounts, which, as well as protecting the platform from other threats, can help reduce fake reviews.
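
To illustrate the transaction-focused approach, the sketch below shows how a review submission might be checked against a platform’s own order records before it is labelled as verified. This is a minimal, hypothetical example: the data model, field names and one-year window are assumptions for illustration, not any platform’s actual system.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical records -- not any platform's real data model.
@dataclass
class Order:
    buyer_id: str
    product_id: str
    purchased_at: datetime

@dataclass
class ReviewSubmission:
    reviewer_id: str
    product_id: str
    submitted_at: datetime

def verification_label(review: ReviewSubmission, orders: list[Order],
                       max_age_days: int = 365) -> str:
    """Label a review as a verified purchase only if the reviewer has a matching
    order for the same product within a recent window; otherwise leave it unverified."""
    for order in orders:
        if (order.buyer_id == review.reviewer_id
                and order.product_id == review.product_id
                and order.purchased_at <= review.submitted_at
                and review.submitted_at - order.purchased_at <= timedelta(days=max_age_days)):
            return "verified purchase"
    return "unverified"
```

A retailer that already holds order data can run a check like this without asking the reviewer for anything extra, which is one reason the trade-offs of requiring verification are smaller in that setting.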

Some platforms allow both verified and unverified reviews on their platforms using labels to distinguish between the two.

Where it’s not working:

Which?’s investigations have found brokers using old accounts with a genuine history. This approach would appear to evade many account verification systems although it would not evade verification of specific transactions.

There is no clear or consistent labelling of verified users across websites. It is often unclear what level of verification a reviewer has received, with terms used differently on different platforms. Whilst there are valid reasons for using different types of labelling, the differing language used to describe similar processes could lead to consumer confusion.

Other labels which do not involve verification, for example Google’s Local Guides programme or Yelp’s Elite programme, may also lead consumers to believe those users have a greater level of verification than they actually do. Review brokers sell reviews from these types of higher-status accounts.

There are many valid reasons why consumers providing a review may not wish to go through verification processes. They may have privacy concerns about giving personal information to the platform and may find more complex verification methods too difficult or time consuming to complete. Consumers can also have negative experiences with businesses that do not result in a verifiable transaction but which can still be valid information for a review. Difficult verification requirements may put users off leaving reviews and may therefore deter platforms from introducing them.

Verification is useful for deterring fake reviews from accounts created for that purpose. However, it is largely ineffective as a method to counter incentivised or manipulated reviews because these involve genuine users who have used the product or service.

Recommendations

Platforms should publicly explain why they have chosen their approach to reviewer verification and provide evidence that it is effective and balanced against user needs for simplicity, privacy and anonymity. This should include making it clear to consumers how they use and store data required in the verification process. 

There are some circumstances where there are substantially fewer trade-offs to requiring verification. For example, an online retailer that also hosts reviews needs no extra data to verify a transaction, would need no extra input from the consumer, and would not be excluding reviews from people who have had a substantial interaction with the business (on which a review could be based) without making a purchase. In those circumstances platforms should be expected to require verification.

There are valid reasons not to require verification or to use less secure verification. Where platforms choose a less secure and less well-evidenced approach to verification, they should adapt their design in other ways to compensate. Either they should not take risks with other elements of their platform design, or they should engage in stricter content moderation with a lower threshold for justifying the removal of content than platforms that use more secure and robust verification processes. We discuss pre- and post-publication content moderation below.

More research is needed to establish how users perceive verified reviewers, whether they understand what existing labels mean and whether verification labels affect their decisions.

Content Moderation Pre-Publication

Why it matters:

Part of protecting consumers from fake and misleading reviews is detecting them before they are published. This is preferable to any attempt to remove them after publication as by that point the harm can already have happened. Having a strong prevention system in place ensures that the bad actor sees no benefit from their attempt. 

What platforms do:

Once verification systems have determined whether someone is authorised to leave a review, platforms have a variety of different methods for checking reviews before they are published. These checks happen once a review is written and submitted to the platform. Large platforms use automated systems built using machine learning to spot possible fake reviews before they are published. These systems analyse the content of the review and the behaviour of the reviewer to determine if something is a fake or misleading review. These platforms are increasingly concentrating their automated systems on the behaviour of the reviewer and how it differs from typical reviewer behaviour. Research from the Department for Business and Trade found that the best way of predicting the prevalence of fake reviews was monitoring the clustering of review activity. This can also be seen in the technology solutions being sold to review hosting sites, which focus on detecting suspicious reviewer behaviour.

These systems can remove a review, send it to a human moderator to check, ask the reviewer to verify themselves in some way, or approve the review for publishing. According to many platforms, these automated checks remove the majority of fake reviews, although there is no way to externally verify this.
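
The broad shape of such a behaviour-based triage pipeline might look like the sketch below. The signals, weights and thresholds are illustrative assumptions for the purpose of explanation, not a description of any platform’s actual system.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class IncomingReview:
    reviewer_id: str
    product_id: str
    rating: int
    submitted_at: datetime
    account_age_days: int
    reviews_last_24h: int          # reviews left by this account in the last day
    product_reviews_last_24h: int  # reviews received by this listing in the last day

def suspicion_score(r: IncomingReview) -> float:
    """Crude behaviour-based score: newer accounts, bursts of reviewer activity and
    clusters of reviews on a single listing all raise suspicion."""
    score = 0.0
    if r.account_age_days < 7:
        score += 0.3
    if r.reviews_last_24h > 5:
        score += 0.3
    if r.product_reviews_last_24h > 20:
        score += 0.3
    if r.rating == 5:
        score += 0.1
    return score

def triage(r: IncomingReview) -> str:
    """Route the review to one of the outcomes described above."""
    score = suspicion_score(r)
    if score >= 0.8:
        return "remove"
    if score >= 0.5:
        return "human_moderation"
    if score >= 0.3:
        return "request_verification"
    return "publish"
```

In a real system the score would come from a trained model rather than hand-set rules, but the routing logic, with different thresholds for removal, human review and requests for verification, works in the same way.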

Smaller platforms are more likely to rely on human checks or simple queries searching for specific terms. For example Which? Trusted Traders commits itself to moderating 100 percent of its reviews to ensure they comply with its terms and conditions. Each review is sent to the moderation team before it goes live on the service. 

Some platforms also introduce a substantial delay between the submission of a review and its publication to disrupt fake review networks by reducing their ability to quickly alter review scores. 

Where it’s not working:

Which? research has repeatedly shown across a variety of platforms that it is still consistently possible for fake reviewers to evade these checks and publish fake reviews. This includes reviews that are obviously fake to a human. For example, in a recent investigation Which? researchers placed a review through a review broker that stated in French in the review itself that “This is a fake company and fake reviews.” This investigation managed to place fake reviews across a wide variety of major platforms. Review brokers told Which? that it was harder to place fake reviews on some platforms when compared to others but that ultimately it was possible to post fake reviews on each platform requested.

The automated systems used are hard to scrutinise by external organisations. Platforms release limited information on the indicators of fake review activity that these systems use in order to not aid fake reviewers trying to evade the systems. This makes it hard to tell how effective these systems are, how they can be improved and whether they are biased against certain types of consumers.

Which?’s investigations have shown that even as external observers, with less access to data than the review hosting platform, we are able to create automated systems to detect suspicious activity on review hosting services. For example, in a recent investigation, Which? was able to create a model for detecting highly suspicious reviews published on the Google Play store and the Apple App store by looking for behaviour like the bulk upload of reviews, suspicious review length and the prevalence of 5-star reviews. We were able to devise a method to detect, at scale, fake review activity that had passed through these platforms’ systems. This suggests that the systems used by these services are insufficient, and that the platforms should be able to do a better job of spotting suspicious review behaviour with the additional data available to them.
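
As a rough illustration of the kind of heuristics described here (not Which?’s actual model), a listing-level check built only from public data might combine a handful of signals and flag a listing when several of them co-occur. The thresholds below are illustrative assumptions.

```python
from datetime import date

def listing_is_suspicious(review_dates: list[date],
                          review_lengths: list[int],
                          ratings: list[int]) -> bool:
    """Flag a listing when public signals cluster: bulk uploads of reviews on a
    single day, unusually short reviews, and a very high share of 5-star ratings."""
    if not (review_dates and review_lengths and ratings):
        return False

    # Bulk upload: a large share of all reviews posted on a single day.
    per_day: dict[date, int] = {}
    for d in review_dates:
        per_day[d] = per_day.get(d, 0) + 1
    bulk_upload = max(per_day.values()) / len(review_dates) > 0.3

    # Very short reviews and an overwhelmingly 5-star distribution.
    short_reviews = sum(1 for n in review_lengths if n < 30) / len(review_lengths) > 0.5
    five_star_share = sum(1 for r in ratings if r == 5) / len(ratings) > 0.9

    # Require at least two independent signals before flagging the listing.
    return sum([bulk_upload, short_reviews, five_star_share]) >= 2
```

Platforms hold far richer data than this (purchase records, device and network signals, reviewer history across listings), so their internal systems should comfortably outperform heuristics of this kind.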

A further concern is that excessive delays to publishing reviews, introduced to disrupt fake review networks, could prevent consumers from being properly informed about faulty or unsafe products. It is unclear how effective these delays are and whether they generate sufficient benefit to outweigh the risk they create for consumers.

Recommendations

While automated pre-publication checks play an important role in safeguarding consumers from fake reviewers, they are not currently sufficient to stop this problem and appear to be substantially less effective at stopping incentivised reviews. Pre-publication checks should be supplemented with robust post-publication procedures to tackle fake and misleading reviews published on platforms’ sites.

Pre-publication checks should be made more impactful by continuous learning from post-publication content moderation and from data gained through effective collaboration between platforms as discussed below under “Off Platform Activity”.

Whilst external stakeholders like Which? lack access to the confidential technical information necessary to call for universal minimum standards, a regulator should be given sufficient powers to audit these systems and suggest improvements where necessary.

Tackling fake and incentivised reviews after publication

Content Moderation Post-Publication

Why it matters:

Where fake and misleading reviews make it past the systems designed to prevent them being published, it is important to have ways to detect and report them. These post-publication checks have a dual purpose: removing fake and misleading reviews that are currently harming consumers, and helping to improve platforms’ prevention systems by learning about the fake and misleading reviews that successfully evade them.

What platforms do:

Platforms offer consumers and businesses tools for reporting potentially fake or misleading reviews. It can be difficult for consumers to spot many fake or misleading reviews. However, some reviews are obviously fake or misleading. Which?’s investigations have repeatedly found fake and misleading reviews from publicly available information. Consumers can report these obviously fake and misleading reviews. These reported reviews can be checked by an automated system, reviewed by a human or prompt a request for more information from the reviewer. Platforms also have internal investigation teams that search for fake and misleading reviews that are live on their service. In addition some platforms re-scan their existing reviews as they update their automated review checking systems. 

Where it’s not working:

Which? investigations have shown that reporting processes are opaque and appear to be ineffective from the perspective of a consumer using the system. For example, in a recent investigation in which Which? reported obviously inappropriate reviews on Amazon (including reviews that were for other products), it was unclear what action, if any, was taken in response to those reports. Some listings with fake reviews were removed; other listings had some reviews removed but many other obviously inappropriate reviews remained. This is a disempowering experience for consumers and may discourage them from reporting reviews in the future.

Platforms have suggested that user reports tend to be inaccurate with only a few correctly identified fake reviews amongst a large number of incorrectly identified genuine reviews.

Recommendations

Pre-publication moderation and verification appear to be less effective at tackling incentivised reviews. Those methods can have some effect in reducing fake review accounts but are poorly equipped to tackle misleading reviews. This suggests that post-publication review should be a crucial way of finding fake and misleading reviews. However, neither platforms nor consumers are served by current processes, as platforms get poor-quality data and consumers feel powerless.

A key element of ensuring that reviews hosted on a website are genuine is ensuring that reporting processes incentivise consumers to accurately report fake and misleading reviews. The design of the user experience should empower consumers and provide positive feedback, if possible, where a fake or misleading review has been correctly identified. A report that has correctly identified a fake or misleading review should also lead to investigations of other reviews on the listing and, if these are found to be fake, further positive feedback should be provided to the consumer.

These processes could be made more accurate by testing better reporting flows which could, for example, require users to specify the problem with the review (does it mention incentives, does it describe a product that is different from that in the listing, and so on). Where consumer reporting is inaccurate, there is a greater need for proactive investigation and review by the review hosting platform. Platforms’ consumer reporting and investigation processes should be substantially more effective than Which?’s ability to easily find fake and misleading reviews as an external organisation.
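
One way to achieve this is to ask reporters to pick a specific reason so that reports arrive as structured categories rather than free text, which can then be routed accordingly. The categories and routing below are an illustrative sketch, not a description of any platform’s reporting flow.

```python
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum

class ReportReason(Enum):
    MENTIONS_INCENTIVE = "reviewer mentions a payment, gift or refund"
    WRONG_PRODUCT = "review describes a different product to the one listed"
    NO_EXPERIENCE = "reviewer appears not to have used the product or service"
    OTHER = "other (free text)"

@dataclass
class ConsumerReport:
    review_id: str
    reporter_id: str
    reason: ReportReason
    detail: str = ""
    created_at: datetime = field(default_factory=datetime.utcnow)

def route_report(report: ConsumerReport) -> str:
    """Send well-specified reports straight to investigators; free-text reports get
    an automated first pass to filter out inaccurately flagged genuine reviews."""
    if report.reason is ReportReason.OTHER:
        return "automated_screening"
    return "investigation_queue"
```

Structured reasons also make it easier to close the feedback loop: the platform can tell the reporter which category their report fell into and whether it led to a removal.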

Off Platform Activity

Why it matters:

Fake and misleading review activity does not only happen on the review hosting site itself. Fake review brokers use search and social media sites to advertise their services. Similarly, groups on social media platforms are used to recruit people to leave fake or incentivised reviews or to trade fake reviews. Tackling the groups that buy and sell fake reviews can be a useful way of protecting consumers from fake reviews.

What platforms do:

Investigators working for platforms monitor the activity of review brokers and groups that solicit reviews for their platforms. They actively look for them on popular search engines, marketplaces and social media platforms as well as using anomalies in data from their platform to uncover information about fake reviewers. When investigators identify fake review brokers or review groups they can:

  • send cease and desist letters to the broker or group
  • report the broker or group to the marketplace, search or social media platform that they used to find them
  • monitor the broker or group’s activity on their platform
  • use the information to ban users suspected of being connected to the broker or group
  • sanction the listings that have used the broker or group.

Where it’s not working:

Review hosting platforms have noted that search and social media platforms have taken insufficient action against reported review brokers or groups and are taking little proactive action to find and remove this sort of content. This includes review brokers paying to advertise through these platforms. Similarly, review hosting platforms do not appear to notify other review hosting platforms when they discover a fake review broker selling fake reviews for multiple sites. There appears to be little to no cooperation in preventing fake review brokers from operating across all sites, although there are signs that this may be changing, with Amazon recently calling for information sharing in tackling bad actors.

There are no clear tools for users to report review brokers or review groups either to platforms hosting these groups or brokers, or to the platforms the groups or brokers are targeting.

Recommendations

Review hosting platforms should notify other review hosting platforms and the regulator when they discover a broker selling reviews or a review soliciting group. Review brokers should be treated more like other sophisticated threat actors. Platform tactics and cooperation should more closely mirror the approach taken to counter state actors' attempts to manipulate platforms through ‘coordinated inauthentic behaviour’. This includes intelligence sharing (including sharing metadata) and joint takedowns across platforms including cooperation with regulators. This collaboration should mean that it is not easy to find obvious fake review brokers and groups on or through any major platform. It can also be used to help improve platforms’ moderation systems. Where necessary, the Information Commissioner's Office (ICO) and the Competition and Markets Authority (CMA) should ensure that guidance is sufficiently clear to support this joined up activity. This mirrors the cooperation Which? would like to see in tackling fraud across a variety of industries.
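
In practice, this kind of intelligence sharing needs a common, privacy-conscious format: for example, sharing broker URLs and salted hashes of account or contact identifiers rather than raw personal data, so that other platforms can match indicators against their own records. The record below is a purely illustrative sketch of what such an exchange might contain; it is not an existing standard, and the field names are assumptions.

```python
import hashlib
from dataclasses import dataclass
from datetime import datetime

def pseudonymise(value: str, salt: str) -> str:
    """Share a salted hash of an identifier rather than the raw value, so receiving
    platforms can match it against their own data without handling raw personal data."""
    return hashlib.sha256((salt + value).encode("utf-8")).hexdigest()

@dataclass
class BrokerIndicator:
    source_platform: str   # which platform discovered the broker
    broker_url: str        # where the broker advertises or sells reviews
    hashed_contact: str    # pseudonymised email, phone or payment handle
    first_seen: datetime
    confidence: str        # e.g. "confirmed" or "suspected"

# Illustrative example: a review site contributing an indicator to a shared feed.
indicator = BrokerIndicator(
    source_platform="example-review-site",
    broker_url="https://broker.example/buy-reviews",
    hashed_contact=pseudonymise("broker@example.com", salt="industry-shared-salt"),
    first_seen=datetime(2023, 5, 1),
    confidence="confirmed",
)
```

Whether identifiers can lawfully be shared in this way is exactly the kind of question on which ICO and CMA guidance would need to be clear.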

Social media and search platforms should ensure:

  • that fake review activity is against their terms and conditions
  • that their search or other discovery functions do not easily recommend review brokers or review soliciting groups
  • that they promptly remove fake review activity when it is reported by review hosting platforms.

The regulator should:

  • monitor the number of requests review hosting platforms make of particular platforms and the speed at which those platforms remove fake review activity.
  • take action against platforms that host too much of this content and are too slow to remove it after it is reported.

We expect to see the CMA given additional powers through the Digital Markets, Competition and Consumers Bill to improve enforcement in cases like this.

Manipulated Reviews

Why it matters:

Reviews can give a misleading impression even if each of the reviews involved is genuine. This can happen through processes such as review merging, where a genuine review is taken from one product and moved to another unrelated product, or through a business ensuring that all the negative reviews are removed from its listing. These reviews have to be treated differently from fake or incentivised reviews as they come from businesses misusing a platform’s systems to change existing reviews rather than pretending to be a genuine consumer leaving a new review.

What platforms do:

Platforms consider the possibility that their systems will be abused when they design them. This can include limiting certain functionality so that it can only be done through the assistance of an employee of the platform or including automated checks that monitor when businesses take a certain action too many times or in a way that appears suspicious. Platforms can also be informed of potential review abuse through consumer reports. 

Where it’s not working:

Which?’s investigations have revealed a number of cases in which platforms’ features have directly led to review abuse. For example, Which? research showed that eBay’s decision to allow product reviews to be shared across listings led to unsafe products being incorrectly attached to positive reviews. Since this investigation eBay has taken steps to reduce these issues.

Similarly, Which?’s investigation into review merging on Amazon found that a single listing had reviews for a wide variety of products. These were genuine reviews that were originally correctly approved for a different product, but Amazon’s system allowed them to be merged onto an unrelated product. Automated systems currently appear unable to counter this form of review abuse.

Flagging and reporting processes can be a powerful facilitator of review abuse. These tools can be used to bias overall review scores by removing genuine negative reviews. Businesses have an obvious incentive to seek to remove negative reviews and there is a serious risk that platforms’ reporting tools will allow them to artificially alter review scores.

For example, businesses on Trustpilot disproportionately flag negative reviews (which is to be expected, as businesses wish to protect themselves from negative reviews). In 2021:

  • 72.5% of reviews submitted to Trustpilot were 5-star reviews and 13.6% were 1-star reviews;
  • 48% of reviews flagged by consumers were 5-star reviews and 36% were 1-star reviews;
  • 3% of reviews flagged by businesses were 5-star reviews and 86% were 1-star reviews.

This shows that business flags are heavily skewed towards removing negative reviews. There is no data to suggest that businesses don’t follow a similar pattern on other platforms. However, Trustpilot’s systems can make this problem worse. When a review is reported to Trustpilot by a business as potentially being fake, the reviewer can be asked to provide documentary evidence to prove that the transaction took place. If the reviewer does not respond then the complaint is upheld. No such requirement exists when a consumer makes an equivalent report. As a result, if a business flags a review as fake, the default if no one takes any action is for that review to be removed; if the same review were instead flagged by a consumer, the default if no one takes any action is for that review to stay published. This difference in burden of proof may partially explain why Trustpilot upholds more reports from businesses (77.2%) than from consumers (16%). If this is the case, Trustpilot scores may more closely reflect a business’s level of reporting activity than the experience it provides to consumers.

Recommendations

Platforms should conduct thorough risk assessments on the impact of their design on review manipulation. Platforms should have clear processes in place for assessing new features that could lead to abuse of their systems. Platforms should also be expected to conduct a new risk assessment when they become aware of evidence that an existing feature is being misused in a way not anticipated in their original assessment. A failure to conduct a thorough risk assessment should be taken by regulators as a key sign that the platform has insufficient processes to protect consumers.

Reporting processes which prioritise the business over the consumer are a key part of review abuse and risk harming the consumer by pushing them toward inferior products or services. Businesses should not be given access to special reporting tools which allow them to unduly bias the reviews that appear. Platforms should actively monitor and investigate businesses that are abusing any tools to remove negative reviews. 

There is a trade-off between the problem of fake reviews and that of manipulated reviews in deciding whether the burden of proof for establishing whether a review is genuine lies with the reviewer or the business. Although it may increase the number of fake reviews on a platform, platforms should not place the burden of proof solely on the reviewer, as this strongly incentivises review abuse by businesses over-reporting negative reviews on their own listings or positive reviews on a competitor’s listing.

Redress

Why it matters:

When a fake or misleading review is discovered, action needs to be taken to prevent further harm. Redress mechanisms can act as a deterrent to prevent businesses from soliciting further fake and misleading reviews, as well as more directly dealing with issues. 

What platforms do:

Platforms have a range of responses when they discover a fake or misleading review on a listing. As well as removing the review, this usually includes informing the business that they have discovered this activity and penalising the business by reducing its ranking in search results or other recommendation systems. Alongside this, platforms will also take action against the account that left the review to prevent it from posting further reviews. Different types of platforms respond differently where businesses are found to have multiple fake or misleading reviews on their account. Platforms that allow businesses to sell directly to consumers (e.g. Amazon) remove businesses that are suspected of fake or misleading review activity. Other platforms which only host reviews (e.g. Trustpilot) will not necessarily delete an account if they encounter fake or misleading review activity; they may instead choose to place a label on the page to inform consumers of the business’s bad behaviour and to restrict that business’s access to their site. This approach is more transparent, showing users that fake or misleading review activity has occurred. Platforms such as Amazon and Google have also recently taken legal action against those leaving fake reviews.

Where it’s not working:

Consumers can be unaware that their purchase has been informed by fake reviews even after a platform has detected these reviews and determined that the business is at fault. Under the Consumer Protection Regulations (CPRs), whose protections are reformulated in the new DMCC Bill, consumers may have the right to redress where businesses have engaged in a ‘misleading action’ or an ‘aggressive practice’ (as defined in the CPRs). The consumer may have a right to unwind the contract within 90 days, a right to a discount of up to 100% of the cost of the product if the prohibited practice is very serious, and a right to damages where applicable. Consumers are currently unable to exercise these potential rights if they are not aware of fake review activity and do not have evidence to show that it has taken place. For consumers to successfully use these rights they must show that the reviews that influenced them were fake, and it is difficult to do that without access to evidence held by platforms.

Recommendations

Platforms should, as a priority, prevent harm from fake reviews where they are discovered including removing the ability for the offending businesses to sell products, removing listings and adding prominent labels that ensure consumers are aware of the business’s malign activity. These should be sufficiently strong to deter businesses from continuing this harmful practice.

Platforms could also deter businesses from using fake reviews by supporting consumers to exercise their consumer rights. Where the platform has data linking consumers to purchases from a business with substantial fake review activity, it could contact those consumers to inform them that they may have been misled and may have rights of redress under the CPRs. Platforms could encourage consumers to seek refunds through their systems and, where consumers believe they have an actionable case, supply them with evidence of fake review activity should they seek to exercise their consumer rights in court. If a platform is displaying a warning, this warning could also inform consumers of their possible right to redress.

Where platforms contact consumers to inform them about fake reviews activity in relation to a product they have purchased, they could also remind consumers of their usual rights to a return and refund under other consumer legislation and the platform’s own policies, as in some cases the relevant time limitations may not have expired. 

Auditing, Continuous Improvement and Transparency Reporting

Why it matters:

Content moderation and platform design will never be perfect at catching and removing fake reviews. For them to improve, there must be robust systems to measure how successful they are and to look for possible ways to increase their effectiveness. This can include auditing to assess how systems are working, continuous improvement processes to upgrade protections and transparency reporting to support external actors in holding platforms accountable.

What platforms do:

Platforms have processes for reviewing and improving their systems for finding and removing fake reviews. Some described weekly review meetings, others talked about regular audits from external teams or processes for continuous revision and improvement. This is particularly true of the automated scanning used to detect fake reviews but guidance for human moderators and corporate processes are also revised.

The level of transparency reporting differs widely between platforms. Some platforms, such as TripAdvisor and Trustpilot, produce detailed transparency reports that include a high-level description of their processes for removing fake reviews as well as statistics on the number of fake reviews removed as a result of different parts of their processes. Other platforms provide only vague public descriptions of their processes and occasionally offer numbers on removals without context or explanation. The Digital Trust and Safety Partnership includes the publication of periodic transparency reports with data on enforcement practices as part of best practice in Trust and Safety.

Where it’s not working:

It is not clear how effective any platform’s measures are at preventing fake reviews from being posted, nor are there any good metrics to measure whether platforms are getting better or worse at taking action on this over time. It is also hard to disentangle changes in platform activity from changes in fake review activity. This makes it hard to evaluate platforms’ processes for auditing and improving.

Without proper transparency reporting, consumers have no reason to trust that platforms are taking adequate measures to ensure reviews on their platforms are genuine. Many platforms fail to properly explain what they do to prevent fake reviews and the effect it has. Civil society organisations do not have the information they need in order to properly inform consumers or hold individual platforms accountable. Platforms have justified this lack of public information on the basis that providing it would help fake review brokers. No evidence has been provided for this assertion and platforms that are transparent have seen no evidence of negative effects from their transparency. 

There are no standardised measures for incidence of fake reviews on a platform nor a common definition for identifying a fake review. It is unclear how far outside the norm review behaviour must be for each platform to classify that review as fake or how suspicious content should appear to be before it is classified as a fake review. These will also change over time as technology develops and as fake review activity changes. 

Recommendations

Platforms should be expected to regularly audit their systems to look for ways to better prevent fake reviews from being posted, highlight potential risks in their design and identify problems with consumer reporting systems. This should feed into a culture of continuous improvement within platforms. Processes can be further improved by working in collaboration with other platforms to share data to better understand the brokers seeking to manipulate reviews.

Due to the lack of standardised measures of incidence, for the foreseeable future determinations of whether platforms are adequately protecting consumers will have to focus on processes rather than high level results.

Platforms should be expected to publish regular transparency reports outlining a high-level description of the measures they take to tackle fake reviews and providing statistics for the number of reviews removed by each part of their process. For example, TripAdvisor’s transparency report includes the number of reviews removed by its automated checks, the number referred to human moderators and the number that those moderators subsequently approve or reject. Transparency reports should provide context on reviews overall on the platform, including the average review rating and the proportion of reviews with different levels of verification, and should compare removed reviews against these overall levels. For example, Trustpilot’s transparency report gives the distribution of star ratings both across the platform and for different types of flagged reviews.

These transparency reports should be accompanied by public-facing action to raise awareness amongst businesses and consumers that fake reviews breach platforms’ terms and conditions, alongside highlighting that these are likely breaches of the CPRs and that businesses that engage in them could face legal action including, potentially, consumers seeking civil redress.

Large platforms should also begin developing and reporting measures of the incidence of fake reviews. Meta reports on the prevalence of other types of policy-violating content on Facebook by collecting a sample of views of content on the platform and manually reviewing that sample to determine what percentage breaks Meta’s policies [1]. This would be in keeping with the Online Safety Bill, under which Ofcom can require large search and social media platforms to produce transparency reports including the incidence of illegal content.
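
A prevalence measure of this kind is essentially a sampling exercise: draw a random sample of review impressions, have human reviewers label each as violating or not, and report the violating share with a margin of error. The sketch below, using made-up numbers, shows the basic calculation; it is not Meta’s or any platform’s actual methodology.

```python
import math

def prevalence_estimate(sample_size: int, violating: int, z: float = 1.96):
    """Estimate the share of sampled review impressions judged to violate policy,
    with a normal-approximation 95% confidence interval."""
    p = violating / sample_size
    margin = z * math.sqrt(p * (1 - p) / sample_size)
    return p, (max(0.0, p - margin), min(1.0, p + margin))

# Illustrative numbers only: 10,000 sampled impressions, 130 judged violating.
p, (low, high) = prevalence_estimate(10_000, 130)
print(f"Estimated prevalence: {p:.2%} (95% CI {low:.2%} to {high:.2%})")
```

Measuring by impressions rather than by review weights the metric towards the reviews consumers actually see, which is the harm it is trying to capture.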

Platforms should be expected to be continuously improving their systems to detect fake review activity through pre-publication checks, post-publication checks and off-platform activity. The success of this continuous improvement should be reflected in the results seen through their transparency reporting. 

Once an ecosystem of reporting on incidence and the effectiveness of systems is established, the regulator should begin publishing data on trends across the industry to help improve trust in these systems and establish where improvement is needed.

Policy Recommendations

In order to ensure that review hosting platforms take steps like those outlined in this paper, the law needs to be clear on these points and properly enforced. The Digital Markets, Competition and Consumers Bill (DMCC Bill) will give the CMA more powers to fine firms that are breaching consumer law and will give the Government the power to add practices to the list of ‘automatically unfair practices’ (currently contained in the CPRs). The Government proposes that these practices should include commissioning or incentivising any person to write and/or submit a fake consumer review of goods or services; offering or advertising to submit, commission or facilitate fake reviews; and hosting consumer reviews without taking reasonable and proportionate steps to check they are genuine.

We support the intention of adding these three practices to the banned list. However, the structure of the Bill means that if these practices are added to the banned list through secondary legislation later as planned (rather than being included in the Bill now) they would not be eligible for criminal enforcement. This risks letting rogue traders off the hook.

The Government should amend the Digital Markets, Competition and Consumers Bill to include the following practices in the list of automatically unfair practices:

Commissioning, incentivising or authorising the writing or submission of false consumer reviews or endorsements, in order to promote products.

Offering or advertising to submit, commission or facilitate false consumer reviews or endorsements.

Displaying consumer reviews of products on an online interface—

a. without taking reasonable and proportionate steps to ensure that such reviews are submitted by consumers who have actually used or purchased the products in question;
b. where any consumers who provided reviews were incentivised to describe certain products in a particular way, without taking reasonable and proportionate steps to ensure this is not the case; or
c. in a way that deceives or manipulates consumers, or where a practice has been undertaken in relation to reviews that otherwise materially distorts or impairs the ability of consumers to make free and informed decisions, without taking reasonable and proportionate steps to ensure this is not the case.

Alongside these changes to bring legal clarity on the illegality of fake reviews and review hosting platforms’ legal obligations, the DMCC Bill gives the CMA additional powers that make it easier to fine companies that breach consumer law. The CMA currently has a number of active investigations into fake reviews that have been ongoing for several years.

The CMA should continue to investigate and take enforcement action against platforms that are taking inadequate action to protect consumers from fake reviews.

Footnote

[1] YouTube has a similar concept of Violative View Rate, and Twitter provides a percentage of impressions that were for tweets that violated their policies.

About

Which? is the UK’s consumer champion, here to make life simpler, fairer and safer for everyone. Our research gets to the heart of consumer issues, our advice is impartial, and our rigorous product tests lead to expert recommendations. We’re the independent consumer voice that works with politicians and lawmakers, investigates, holds businesses to account and makes change happen. As an organisation we’re not for profit and all for making consumers more powerful.