Policy research paper

The Consumer Voice: Automated Decision Making and Cookie Consents proposed by “Data: A new direction”

Which? research finds that consumers have strong views about some of the proposed changes to the UK’s data protection regime as set out in the ‘Data: A new direction’ consultation launched by the Department for Digital, Culture, Media and Sport

Which? undertook qualitative research using a deliberative approach to understand consumer views on some of the proposed changes to UK data protection law outlined in the consultation. The research focused on two areas: 

  1. The use of AI to make solely automated decisions about consumers. 
  2. The use of cookies to gather data about consumers, and how consent for this is obtained.

Participants appreciated the use of AI when it helped them receive tailored suggestions, such as TV programme recommendations, but were more uncomfortable with automated decisions being made about them solely using AI, such as approval for financial products. After discussing the proposal to remove Article 22 of the UK GDPR, which gives individuals a right not to be subject to a decision based solely on automated processing, participants felt strongly that this should be retained. Participants could see no consumer benefit to removing the right to challenge.

Participants also discussed the use of cookies, and they felt it was important they had control over how their data is used. Almost all thought third party cookies that track and profile them should only be used with explicit consent. Participants also reflected on the range of alternative mechanisms for obtaining cookie consent set out in the consultation. The favoured approach overall was setting cookie preferences in your browser/device, with the cookie banner system ranking second.

1. Introduction

1.1 Background

In September 2021, the Department for Digital, Culture, Media and Sport (DCMS) launched its consultation on reforms to the UK’s data protection regime (UK GDPR). The consultation is wide-ranging and contains many proposals that would have a direct impact on consumers. We decided it was important to talk directly to consumers to understand their views on some of these proposals, as well as to gather their thoughts on the data landscape more widely. We designed a consumer engagement exercise comprising a deliberative online community conducted over the course of a week (18–23 October 2021). More details of the structure and format of the consumer engagement are given below.

1.2 Method

We achieved high-quality engagement with 22 consumers – equivalent to the number that would be expected if we were to conduct two focus groups – by working with them over 6 days using a deliberative but asynchronous online research method hosted on Recollective, our platform for conducting online qualitative research. In this virtual space, participants were able to examine stimulus materials, learn about the topic areas and proposed reforms, share their views and engage with others’ opinions. 

Given that the consultation is so wide-ranging, we focused our consumer engagement on two areas:  

  • Decision making processes using Artificial Intelligence (AI), and the proposal to remove the right to challenge solely automated decisions.
  • Data tracking through cookies, and the proposals to change cookie consent mechanisms.

We covered these topics through a series of tasks participants responded to across the 6 days of the community, structured as follows:

Day 1: Introduction. Participant welcome and introduction to the community, gathering baseline attitudes to data collection.

Day 2: Cookies. Introductory video explaining cookies, extract from an interview with Lou Montulli (the inventor of the cookie) explaining his thoughts on the pros and cons of cookies, cookie examples, poll asking what types of cookies should count as ‘essential’.

Day 3: Consent. Poll asking comfort levels with different types of cookies being collected without consent, participant response to real life cookie banners, poll to rank consumer response to the government’s 4 proposed options for cookie consent.

Day 4: AI. Summary of AI and how it works, examples of AI in real life (entertainment content recommendations and insurance premiums), poll to understand comfort levels with AI decision making in various scenarios, summary of fully automated decision making.

Day 5: Right to challenge AI. Views on consumer safeguards in AI, response to the proposal to remove the right to challenge solely automated decisions made about individuals, poll to understand comfort levels with AI decision making in various scenarios (1) with a right to challenge and (2) without a right to challenge.

Day 6: Overall reflections. Consumer priorities for the future of data protection, any questions they have for the government, and reflections on what they had learnt and what surprised them.

Light-touch moderation was conducted by policy researchers at Which? to clarify important points rather than to lead the discussion. The community itself was made available for team members within DCMS to observe and see first-hand what consumers were saying about the proposals being put forward in the Data: A new direction consultation.

1.3 Sample

We recruited study participants via Roots Research, a specialist recruitment agency accredited by the Market Research Society and the Association for Qualitative Research, to provide a balanced cross section of the online population, setting quotas for key demographics and internet behaviour/usage. All research participants were members of the general public and not Which? members or supporters. 

All demographic and usage quotas were informed by profile data from a general public survey. Minimum quotas were set for gender, ethnicity, income and internet usage. The table below illustrates the make-up of the community participants.

Participant information

Gender: Male 12, Female 11 (total 22)
Household income: <£21k 6, £21k–£41k 7, £41k+ 9

1.4 A journey of discovery

Before we summarise the key findings from the community in sections 2 and 3, it is important to note that consumers were incredibly engaged with the subject matter and grasped all the concepts we presented to them extremely well. This reinforces our belief that consumers are fully able to understand complex issues such as automated decision making when they are presented with information in accessible, digestible and relevant ways. We found that many of the 22 participants went on a journey of discovery during the 6-day exercise and were able to articulate strong and reasoned views once they had engaged with the relevant stimulus material.

For instance, in terms of cookies, many were unaware of the scale and extent of cookie tracking.

“I am quite surprised that there are so many types of cookies. I didn’t realise that there were so many and that they had different functions. I was only aware of the cookies that monitor the adverts that you have clicked on.”

“It was good to be educated on the specific cookies that track my internet use. It’s good to know there is a way to track them and turn them off if I wish.”

And by the end of the process, some participants described how participation in the community had led them to change their behaviour online. 

“[Participation] has increased my curiosity and understanding. I have been reading the pop ups on cookie consent on websites I have visited recently and declining more often than I was previously.”

This gives us further evidence that consumers are very capable of contributing to and taking part in debates that shape future policy and regulation. Not only this, but they see it as an important lens through which to test policy proposals, as encapsulated by this question posed to the government by one participant on the closing day of the community:

“What proposals are you considering and why? What consultation/research processes have been/will be conducted?  Who would be involved in those processes? What is their expertise/experience?  Will the public be involved in the consultations? If not, why not?”

2. Views on AI and automated decisions

2.1 Whilst participants could see some benefits of AI, they were cautious about potential risks

Before we explored what participants felt about fully automated decisions made by AI, we examined what consumers already understood in this area and invited them to comment on the benefits and risks of AI.

The research revealed that consumers view AI as mainly benefiting companies (for example through efficiency savings and increased productivity) and consumers with straightforward needs or circumstances, for whom AI-determined personalisation was viewed as fairly low risk. 

Many participants did not mind the use of AI either in suggesting options when they were deciding what programmes to watch (considered by some to be helpful and efficient) or in supporting decisions (the example they responded to referenced insurance premiums). However, it was not uncommon for participants to voice concerns around potential bias and inaccurate assumptions being made based on a person’s past behaviour and internet browsing. They were particularly worried about this risk in scenarios where AI is used to make a decision about an outcome that could greatly impact the consumer. In short, participants were comfortable with AI making tailored suggestions and supporting decisions, but less comfortable with it making bigger, life-altering decisions such as mortgage approvals, health insurance and so on. 

“I would not feel comfortable using AI to make decisions for and about me, but I think it is a useful tool to provide information that supports the decision making process.”

Furthermore, participants were concerned about what data AI was analysing and doubted whether it could be trusted to be fair. One participant wrote at length about their concerns in this area:

“Are they identifying people and helping them to save money with their best interests at heart? Or are they hiding behind the anonymity provided by technology in order to exploit these people for profit?... If we are aligned to speaking truthfully and acting and communicating honestly I believe we are headed for great things. If not, we decline. I feel that both are going on and it is sometimes hard to know which organisation is truthful and which organisations  again exploit whatever, to try to appeal to more people, or get more money. ”

Participants also raised concerns over the risk of data breaches and inadequate privacy provisions, which points to consumers not having enough information and reassurance about the security and governance of AI systems. 

2.2 Concern over automated decisions solely made by AI

Participants felt strongly that decisions made solely by AI should have the option for human review. An important factor in this was the concern that decisions made autonomously by AI would lack the nuance and social context of those made by humans. This was raised as a particular concern when AI was making decisions with the potential for large ramifications on people’s lives, where it was important to consider the complexities of life. We provided participants with a case study of a real-life mortgage application made using AI. It elicited many worried responses such as the one below:

“I also think everybody’s circumstances are different, one size doesn’t fit all! There are sometimes things that happen in people’s lives which you need to be able  to have the opportunity to explain, when it comes to the company making a decision  about something. That’s when it is good to have the human involvement.”

Consumers were aware that AI is still developing and cited examples of existing AI algorithms that have problems, such as issues with the use of facial recognition. Given that AI can often reflect and amplify human biases, participants felt it was important that solely automated decisions be subject to human review to ensure errors and bias have not crept in.

Lack of transparency of AI systems was a critical issue for our community, and many felt there were already elements of AI that were very opaque. The concern was multifaceted: some felt there should be a moral obligation to disclose when AI is being used, in order to give consumers a fair choice. Others wanted the use of AI to be disclosed so they could opt out of its use in decisions made about them, or be given a choice over which data is used in autonomous decisions. 

“Consumers should be made aware at the outset that AI is been used in a clear and concise way,  in language that the average consumer is aware of what is been communicated. As a consumer I want to have the choice to be able to make an informed choice  and be given the option to opt in or out of AI enabled services or products.”

Lastly, some wanted to know more about how the process worked, with reassurance that rigorous analysis of each system was conducted ahead of implementation. Some spontaneously called for AI and algorithms to be more regulated than at present, given the potential for negative impacts on people. 

“This AI functionality needs to be regulated so companies can only pull data that is relevant for the purpose it is trying to fulfil and only when the customer truly understands the way their data is being handled and shared to protect their privacy rights.”

2.3 Consumers feel very strongly that the right to challenge decisions using AI should not be revoked

Removing the right to challenge was seen as unethical. Participants strongly believed that the right to challenge should exist and that human intervention is necessary. The removal of the right to challenge was viewed as of no benefit to consumers, and unfair. 

“I am horrified by the proposal to remove the right to view and challenge fully automated  AI based decisions, we are human beings and should be treated as such. AI may be helpful  in business but should not be allowed to make final irreversible decisions.”

Participants felt the ability to challenge decisions made using AI was a right, not a privilege, and its removal would be “dehumanising”. As we saw above, participants were already aware of flaws in some automated decisions, and so taking away the right to challenge would give too much power to fallible systems that could have unintended consequences. Indeed, some thought that unchallenged solely automated decisions would further skew the imbalance of power between consumers and businesses and give companies more ways to evade responsibility if something went wrong.

“Losing this right would mean that no-one would be held accountable for anything,  companies can just blame mistakes on ‘system issues’.”

We asked participants to complete a poll on their comfort levels with 7 different scenarios in which AI could be used to make decisions about them. The results showed that comfort with AI making solely automated decisions was lower for financial decisions than for other scenarios. Comfort levels dipped further when participants were asked to consider the 7-item list a second time, assuming there would be no right to challenge (see Figure 1 below).  

Figure 1: Which of these decisions would you be comfortable with AI making, if the right to challenge was not available? 

Participants could see no consumer benefit to removing the right to challenge, but anticipated a range of risks such as discrimination or mistreatment, financial impacts, the loss of privacy and potentially the loss of freedom and justice:

“I am appalled they are looking to remove this and have very strong objections to this. I think this will end up discriminating against people and overall is incredibly negative.”

“The decision to remove our rights as consumers to view and challenge fully automated AI decisions that can have a devastating effect on our lives be it financial, health etc is absurd, I was under the impression I am living in the UK not China. I believe this will have a huge financial impact on consumers getting Insurance, mortgages, loan, health care basically living a productive life, this can only lead to crime, poverty and the breakdown of family life.”

“I think it would be very bad if we lost that right I would feel very uneasy about it all we all have a right to our privacy and surely we should be able to choose how much info we would allow to be taken.”

3. Cookie consents

3.1 It is important for consumers to have control over how their data is used

It is very clear that consumers want to continue to be able to actively choose which cookies are consented to when they use the internet and what data is collected about them. 

Whilst we heard a range of views about engagement with advertising and how it can be useful, when asked what data they considered ‘strictly necessary’ to collect about them, not one of the 22 respondents felt that tracking by a third party was essential.

As shown in Figure 2, the vast majority of participants (21 out of 22) said they were ‘not comfortable’ with the idea of third party cookies collecting data without explicit consent. Indeed, 15 of the 22 participants said they were ‘not at all comfortable’. Not only do they not view this activity as essential, they are actively uncomfortable with this occurring.

Figure 2: How comfortable or uncomfortable do you feel about companies collecting the following types of data about you through cookies without your explicit consent?

“I don’t believe it is ethical to collect data without consent and that the government should provide protection from this.”

However, in the same poll, 16 of the 22 participants said they were very/fairly comfortable with their behaviour on a site being used for first party purposes without giving explicit consent.

“I appreciate that some cookies make my life easier when I am returning to a site I haven’t visited before, but not so happy that my browsing history and advertisements  are used for targeted advertising.”

“I have no problem with receiving a better service for [the] use of providing my own information.” 

“I feel a little uncomfortable knowing that many companies are following me. I only think it is acceptable for the store or website I am visiting to track my activity  there and once I leave it should stop.”

Participants felt that whilst first party and functional cookies can feel intrusive, they also understood that these can be a necessary element of a good consumer experience online. The collection of data about the technology they are using (device, internet provider, time of day) – used only by the first party website, without explicit consent – was seen by 15 of the 22 participants as something they would be ‘very comfortable’ or ‘fairly comfortable’ with.
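
As a purely illustrative aside, the distinction participants were drawing can be sketched in code: a site sets the first party, functional cookies it needs in order to work by default, but only sets analytics or tracking cookies once the visitor has explicitly agreed. This is a minimal sketch in Python, not a description of any specific site or of the consultation proposals; the cookie names and the consent flag are hypothetical.

# Illustrative sketch only: gating non-essential cookies behind explicit consent.
# Cookie names and the consent flag are hypothetical examples.
from http.cookies import SimpleCookie

def build_set_cookie_headers(consented_to_tracking: bool) -> list:
    cookies = SimpleCookie()

    # First party, functional cookie: needed for the site to work
    # (e.g. keeping a basket or login session), so set regardless of consent.
    cookies["session_id"] = "abc123"
    cookies["session_id"]["path"] = "/"
    cookies["session_id"]["httponly"] = True
    cookies["session_id"]["samesite"] = "Lax"

    # Tracking/analytics cookie: only set once the visitor has explicitly
    # agreed, reflecting the preference participants expressed.
    if consented_to_tracking:
        cookies["analytics_id"] = "xyz789"
        cookies["analytics_id"]["path"] = "/"
        cookies["analytics_id"]["secure"] = True
        cookies["analytics_id"]["samesite"] = "None"  # typical for cross-site use

    # Each morsel becomes one Set-Cookie header in the HTTP response.
    return [morsel.OutputString() for morsel in cookies.values()]

print(build_set_cookie_headers(consented_to_tracking=False))  # session cookie only
print(build_set_cookie_headers(consented_to_tracking=True))   # session + analytics

Third party tracking cookies of the kind participants objected to work in a similar way, except the Set-Cookie header comes from another organisation’s domain embedded in the page, which is what allows activity to be followed across sites.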

3.2 Setting cookie preferences in your browser/device was the most popular mechanism, followed by banners

When we asked our participants to tell us which of the 5 cookie consent mechanisms (as presented by the government in the Data: A new direction consultation) they felt most comfortable with, no-one selected ‘analytical/performance cookies to be defined as “essential” and collected without explicit consent’ as their preferred option. Nor did anyone select ‘all cookies to be automatically dropped onto a device without explicit consent’ as their favoured mechanism. This is further evidence that consumers want to be informed and asked for consent whenever cookies are used to collect data about them.


How participants ranked the 5 consent mechanisms (number of participants selecting each option at each rank, out of 22):

  • Users must set up their browser, app or device to give/deny consent for analytics and third party cookies – 1st choice: 13, 2nd: 8, 3rd: 0, 4th: 1, 5th: 0.
  • Cookie banners on individual websites/apps ask for consent for analytics or performance cookies – 1st choice: 8, 2nd: 10, 3rd: 4, 4th: 0, 5th: 0.
  • Consumers choose a “trusted third party” to manage data about them and give their data to companies – 1st choice: 1, 2nd: 3, 3rd: 12, 4th: 5, 5th: 1.
  • All cookies are automatically dropped onto devices without explicit consent – 1st choice: 0, 2nd: 0, 3rd: 0, 4th: 3, 5th: 19.
  • Analytics/performance cookies are defined as ‘essential’ and collected without explicit consent – 1st choice: 0, 2nd: 1, 3rd: 6, 4th: 13, 5th: 2.

Out of the 5 options we presented, over half of participants (13 of the 22) chose the option of setting cookie preferences in their browser/device as their preferred consent mechanism. 

“I believe the most efficient way to deal with cookies preferences is to set these up in your browser once and apply the same settings to every new site you open in that browser.”

The benefit of setting consent within a browser/app/device was viewed as one of convenience and ease rather than one of nuanced choice. The act of setting consents in a browser was understood to give internet users ‘more control’, defined in a very specific way – it was perceived as providing consumers with the ability to express their preferences across all sites at the same time, as opposed to repeatedly dealing with banners, which are not within their control.

“Having the choice to set my device for all applications and websites would allow me  to feel more in control of what data is shared.” 
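
To make the browser-level option more concrete, the sketch below shows one hypothetical way it could work: the browser sends the user’s stored preference with every request, and sites read that signal instead of showing their own banner. The header name Cookie-Consent-Preference, its format and the parsing logic are all assumptions made for illustration; the consultation does not specify such a mechanism.

# Hypothetical sketch of a site reading a browser-level consent preference
# instead of asking via its own banner. The header name and values are
# invented for illustration.
DEFAULTS = {"essential": True, "analytics": False, "third_party": False}

def consent_from_browser(headers):
    """Parse a hypothetical 'Cookie-Consent-Preference' header such as
    'essential=yes; analytics=yes; third_party=no'."""
    raw = headers.get("Cookie-Consent-Preference")
    if raw is None:
        # No browser-level signal: the site would fall back to its own banner,
        # or to setting essential cookies only.
        return dict(DEFAULTS)
    prefs = dict(DEFAULTS)
    for part in raw.split(";"):
        key, _, value = part.strip().partition("=")
        if key in prefs:
            prefs[key] = value.strip().lower() in ("yes", "true", "1")
    return prefs

# The same stored choice applies to every site the user visits -
# the 'set once, apply everywhere' control participants valued.
print(consent_from_browser({"Cookie-Consent-Preference": "essential=yes; analytics=yes; third_party=no"}))
print(consent_from_browser({}))  # no signal: essential cookies only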

Some questioned how accessible this option would be, and whether it would be inclusive and easy to use for everyone. One participant wondered if users might forget or need reminding of the settings, whilst another worried about what the default settings in browsers would be. Others felt strongly that consent should be both explicit and nuanced, which individual cookie banners allow for, and so preferred this option. In total, 8 out of 22 participants chose cookie banners on individual sites as their most preferred consent mechanism; this challenges the view that consumers dislike or are fatigued by cookie banners. Consumers – when they understand the purpose of banners and the implications of the various options – find them useful processes for control and choice:

“I think banners are preferable [even if] they’re irritating and unsightly!”

The problem expressed with cookie banners is not the concept or inconvenience of the banner itself, but rather that there is no standardised process for how cookies are explained and how consent is presented. Many felt cookie banners and explanations about what cookies are and what they do were inconsistent in design and information provision, and that the worst examples use manipulative tactics and draw on dark patterns to encourage more wide-ranging consents.

“I noticed that you get an option to manage cookies on [name of site]. You have the option  to disable all cookies except “essential” cookies. It also doesn’t give enough explanation  as to what data the “essential” cookie is collecting.”

“My options were “I’m happy with all cookies, let’s go” or “I want to manage cookies”. On selecting the manage button I was given a similar, but smaller, list than before. This time the choice of YES or NO was not as clear as before. There was a box around NO and the YES was shaded – I could assume this meant that No had been selected but I’ve been caught out before with such ambiguity.”

The option of giving consent through a ‘trusted third party’ was not well understood, and it would be useful to dedicate more time to engaging consumers on this option, which garnered 1 vote as the preferred consent mechanism. Consumers tried to engage with the concept but could only give it superficial consideration without access to digestible and relevant information.  

“I’m not sure how I am able to trust a company that I have no idea how they plan  to use my data, what data is stored and for how long.”

“I don’t know who ‘a trusted third party’ would be. If this is any company that makes a profit then I’m not sure that I could trust them to keep my data safe.”

Giving the consumer the ability to consent and choose which cookies are used remains preferable to options where explicit consent is withdrawn and the concept of what is ‘strictly necessary’ is extended.

It is clear from our research that consumers want an option that is easy to use, clearly signposted and easy to understand. From a policy and consumer engagement perspective, Which? would like to see the participants’ preferred option of giving consent through a browser, application or device developed further. We would be happy to engage with the development and testing of potential models with consumers to ensure that the design, user experience, language, signposting and ease of use fulfil consumers’ needs. Some participants even made this suggestion themselves.

“Considering the scheme to protect consumer data collected through cookies, what is the  worst case scenario for an average consumer, who has little understanding of cookies?”

“Will proposals go through an evaluation period?”

4. Consumer questions and comments about the reforms

Participants’ final questions and reflections about the proposed reforms on the last day of the community showed they fully grasped the issues at hand. They posed a number of very insightful questions which reflect the very issues we would like to see investigated seriously. These included:

“My question to the Government would be simple. Are you trying to support and protect people online?”

“Are you confident you can put enough safeguards in place to protect consumers and ensure they are not taken advantage of or are not going to be negatively affected by the use of AI?”

“What is the criteria for a cookie to be necessary and why?”


About

Which? is the UK’s consumer champion, here to make life simpler, fairer and safer for everyone. Our research gets to the heart of consumer issues, our advice is impartial, and our rigorous product tests lead to expert recommendations. We’re the independent consumer voice that works with politicians and lawmakers, investigates, holds businesses to account and makes change happen. As an organisation we’re not for profit and all for making consumers more powerful.