Benefits and concerns

These findings are from the latest survey (2025). Findings from the previous survey (2023) are also available.

We asked people to indicate the extent to which they felt each technology in our survey would be beneficial, and separately the extent to which they were concerned by each technology. Overall, we found that the public holds nuanced views about AI, seeing both the benefits and risks associated with different applications. 

Perceptions of benefit are high for some AI applications in health diagnostics and policing

The UK public has high expectations for some AI technologies. In particular, the majority of the public perceive facial recognition in policing (91%) and AI-driven risk assessment for cancer (86%) to be beneficial uses of AI. A majority of the public (63%) also have positive views about LLMs (e.g. ChatGPT).

Expectations of positive impact are lower for other uses of AI, as shown in Figure 3. While optimism around the use of AI in diagnostics for cancer is high, the same is not true for the application of AI in other areas of healthcare, with only 36% of the public perceiving mental health chatbots to be beneficial, and 55% perceiving robotic care assistants to be beneficial. As with our previous survey wave, the only public sector application of AI in our current survey – the use of AI for assessing eligibility for welfare benefits – was also viewed less positively than others, with less than half (47%) of the public perceiving it as beneficial. Similarly, only 45% perceive driverless cars to be beneficial. 


Figure 3: The extent to which each AI use is perceived as beneficial

(Due to rounding, percentages may not total 100%)

‘To what extent do you think that the use of this technology will be beneficial?’

Technology Very Fairly Don’t know/​prefer not to say Not very Not at all
Facial recognition for policing 49% 42% 3% 4% 2%
Assessing risk of cancer 52% 34% 7% 5% 2%
Large language models (e.g., ChatGPT) 17% 46% 18% 13% 6%
Assessing loan repayment risk 11% 46% 13% 23% 7%
Robotic care assistants 13% 42% 15% 19% 10%
Assessing welfare eligibility 9% 38% 17% 25% 11%
Driverless cars 14% 31% 7% 27% 20%
Mental health chatbots 5% 31% 19% 28% 17%
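The headline ‘beneficial’ figures quoted in the text (91%, 86%, 63%) are the sum of the ‘very’ and ‘fairly’ shares in the table above. A minimal Python sketch of that aggregation (the dictionary and variable names are illustrative, not taken from the survey’s analysis code):

```python
# Figure 3 rows: (very, fairly, don't know, not very, not at all), in %.
benefit = {
    "Facial recognition for policing": (49, 42, 3, 4, 2),
    "Assessing risk of cancer": (52, 34, 7, 5, 2),
    "Large language models": (17, 46, 18, 13, 6),
}

# Net "perceived beneficial" share = very + fairly.
net = {tech: very + fairly for tech, (very, fairly, *_) in benefit.items()}
print(net)
# {'Facial recognition for policing': 91, 'Assessing risk of cancer': 86, 'Large language models': 63}
```

Because of rounding, the five response shares in a row may sum to slightly more or less than 100%.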

Concerns around AI are substantial, even when expectations of positive impact are high

Overall, the UK public are most concerned about the application of AI in driverless cars (75%), mental health chatbots (63%) and assessing welfare eligibility (59%). Figure 4 shows concern levels for each AI technology. Even for technologies where expected positive impact is high, concern levels are also substantial. For example, nearly two-fifths (39%) of the public are concerned by the use of facial recognition in policing. 


Figure 4: The extent to which each AI use is perceived as concerning

(Due to rounding, percentages may not total 100%)

‘To what extent are you concerned about this use of this technology?’

Technology Very Fairly Don’t know/​prefer not to say Not very Not at all
Driverless cars 33% 42% 3% 16% 6%
Mental health chatbots 22% 41% 9% 21% 7%
Assessing welfare eligibility 16% 43% 7% 26% 8%
Robotic care assistants 19% 39% 8% 27% 6%
Assessing loan repayment risk 9% 41% 7% 34% 9%
Large language models (e.g., ChatGPT) 11% 36% 13% 31% 9%
Facial recognition for policing 7% 32% 2% 40% 19%
Assessing risk of cancer 4% 26% 5% 43% 21%

Different demographic groups have distinct attitudes to applications of AI

We observed key demographic differences in the perceived benefits of each AI technology. Black/​Black British and Asian/​Asian British people are more likely than the national average to view applications of robotics (driverless cars and robotic care assistants), LLMs and mental health chatbots as beneficial. The demographic difference in perceived benefits is most notable for general-purpose LLMs, where 80% of each minoritised ethnic group perceives them as beneficial, compared to 63% of the general population. 

People on lower incomes and those with fewer digital skills are less likely than the general public to perceive nearly all of the AI technologies we asked about as beneficial. For instance, only 48% of people on lower incomes felt robotic care assistants could be beneficial compared to 55% of the general population. Among those with fewer digital skills, 41% felt LLMs could be beneficial compared to 63% of the general population.

Figure 5 highlights demographic variation in overall levels of perceived benefit for each technology.

Figure 5: The extent to which each AI use is perceived as beneficial: demographic analysis

(Due to rounding, percentages may not total 100%)

‘To what extent do you think that the use of this technology will be beneficial?’

Technology Very Fairly Don’t know/​prefer not to say Not very Not at all
Asian or Asian British
Facial recognition for policing 40% 49% 4% 6% 1%
Assessing risk of cancer 47% 38% 4% 6% 5%
Large language models (e.g., ChatGPT) 30% 50% 10% 8% 2%
Assessing loan repayment risk 15% 42% 11% 26% 6%
Robotic care assistants 15% 52% 16% 11% 6%
Assessing welfare eligibility 11% 40% 18% 20% 11%
Driverless cars 13% 40% 9% 25% 13%
Mental health chatbots 9% 36% 22% 23% 10%
Black or Black British
Facial recognition for policing 44% 46% 5% 4% 1%
Assessing risk of cancer 52% 36% 7% 3% 2%
Large language models (e.g., ChatGPT) 34% 46% 15% 3% 2%
Assessing loan repayment risk 12% 50% 11% 18% 10%
Robotic care assistants 20% 44% 10% 15% 11%
Assessing welfare eligibility 10% 39% 17% 27% 7%
Driverless cars 16% 37% 5% 25% 16%
Mental health chatbots 9% 40% 17% 25% 8%

Technology Very Fairly Don’t know/​prefer not to say Not very Not at all
Low income
Facial recognition for policing 46% 42% 5% 5% 2%
Assessing risk of cancer 45% 34% 10% 8% 3%
Large language models (e.g., ChatGPT) 14% 41% 25% 12% 8%
Assessing loan repayment risk 10% 42% 15% 24% 9%
Robotic care assistants 10% 38% 18% 21% 13%
Assessing welfare eligibility 8% 34% 18% 25% 15%
Driverless cars 11% 27% 9% 26% 27%
Mental health chatbots 6% 28% 21% 26% 19%
Fewer digital skills
Facial recognition for policing 43% 46% 6% 4% 2%
Assessing risk of cancer 44% 33% 12% 8% 4%
Large language models (e.g., ChatGPT) 10% 31% 33% 18% 8%
Assessing loan repayment risk 10% 39% 19% 25% 7%
Robotic care assistants 8% 30% 20% 25% 16%
Assessing welfare eligibility 10% 33% 22% 23% 12%
Driverless cars 8% 24% 10% 27% 30%
Mental health chatbots 6% 26% 26% 23% 19%
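The subgroup gaps described in the text (41% vs 63% for LLMs among those with fewer digital skills; 48% vs 55% for robotic care assistants among people on lower incomes) are again ‘very’ plus ‘fairly’ sums, compared against the national figures from Figure 3. A short illustrative sketch (names and layout are ours, not the report’s):

```python
# (very, fairly) shares in %, read off Figures 3 and 5.
llm_general = (17, 46)        # nationally representative sample
llm_fewer_digital = (10, 31)  # fewer digital skills subgroup
care_general = (13, 42)       # robotic care assistants, general population
care_low_income = (10, 38)    # robotic care assistants, low-income subgroup

# Gaps in percentage points between the general population and each subgroup.
gap_llm = sum(llm_general) - sum(llm_fewer_digital)    # 63 - 41 = 22 points
gap_care = sum(care_general) - sum(care_low_income)    # 55 - 48 = 7 points
print(gap_llm, gap_care)
```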

We also observed key demographic differences in perceptions of concerns. Over half of all Black (57%) and Asian (52%) people in our sample reported being fairly or very concerned about the use of facial recognition in policing, compared to 39% of the general public. The top three concerns they reported included: 1) the gathering of personal information which could be shared with third parties (for 59% of Black people and 60% of Asian people); 2) causing police to rely too heavily on technology rather than their professional judgment (for 66% of Black people and 58% of Asian people); and 3) the risk of innocent people being wrongly accused if the system makes mistakes (for 62% of Black people and 61% of Asian people). Black and Black British people are also more likely to report concerns around the use of AI to determine eligibility for welfare – 71% find this use of AI very or fairly concerning, compared to 59% of the nationally representative cohort. 

For some applications of AI, concern levels are lower among oversampled subgroups. Individuals belonging to low-income groups (42%) and those with low digital skills (38%) are significantly less concerned by general-purpose LLMs compared to the national average (47%). This is in line with their perception of the benefit of these technologies. Those with low digital skills (55%) are also less concerned by mental health chatbots compared to the nationally representative cohort (63%). And Asian and British Asian people (49%) are less concerned about robotic care assistants than the average (58%).

Figure 6 highlights demographic variation in overall levels of concern for each technology. 

Figure 6: The extent to which each AI use is perceived as concerning: demographic analysis

‘To what extent are you concerned about this use of this technology?’

Technology Very Fairly Don’t know/​prefer not to say Not very Not at all
Asian or Asian British
Driverless cars 30% 43% 3% 17% 7%
Assessing welfare eligibility 19% 34% 9% 32% 6%
Mental health chatbots 21% 37% 12% 22% 8%
Robotic care assistants 12% 37% 9% 33% 9%
Assessing loan repayment risk 12% 40% 8% 29% 12%
Facial recognition for policing 10% 42% 3% 36% 10%
Large language models (e.g., ChatGPT) 11% 36% 9% 29% 14%
Assessing risk of cancer 10% 30% 4% 40% 17%
Black or Black British
Driverless cars 37% 38% 1% 16% 7%
Assessing welfare eligibility 19% 52% 6% 20% 3%
Mental health chatbots 13% 43% 10% 27% 7%
Robotic care assistants 26% 32% 8% 27% 7%
Assessing loan repayment risk 15% 40% 10% 28% 7%
Facial recognition for policing 10% 47% 3% 31% 10%
Large language models (e.g., ChatGPT) 5% 33% 9% 40% 13%
Assessing risk of cancer 6% 35% 7% 36% 17%

‘To what extent are you concerned about this use of this technology?’

Technology Very Fairly Don’t know/​prefer not to say Not very Not at all
Low income
Driverless cars 39% 37% 5% 12% 7%
Assessing welfare eligibility 19% 42% 9% 25% 6%
Mental health chatbots 23% 37% 13% 19% 8%
Robotic care assistants 21% 38% 12% 23% 5%
Assessing loan repayment risk 11% 39% 10% 33% 7%
Facial recognition for policing 8% 33% 3% 39% 16%
Large language models (e.g., ChatGPT) 12% 30% 18% 30% 10%
Assessing risk of cancer 6% 29% 8% 40% 17%
Fewer digital skills
Driverless cars 39% 37% 4% 14% 6%
Assessing welfare eligibility 14% 40% 12% 26% 7%
Mental health chatbots 19% 36% 17% 20% 8%
Robotic care assistants 25% 35% 14% 21% 5%
Assessing loan repayment risk 10% 39% 15% 31% 6%
Facial recognition for policing 6% 30% 5% 42% 17%
Large language models (e.g., ChatGPT) 9% 29% 25% 26% 10%
Assessing risk of cancer 6% 32% 10% 38% 14%

The findings above highlight that attitudes to AI are multifaceted. First, AI is not perceived as a single entity: views vary across specific applications. For example, the public holds positive attitudes towards general-purpose LLMs, with high overall benefit scores and comparatively low levels of reported concern, but more negative attitudes towards mental health chatbots. This shows that the context in which each technology is applied matters.

Second, people can simultaneously perceive benefits and risks associated with different applications of AI. For each of the applications of AI we asked about, people reported differential levels of both perceived benefit and concern that are not mirror images of each other. 

Third, minoritised demographic groups perceive differential levels of benefits and concerns for each AI application, as we explore above. For instance, people on lower incomes have higher levels of concern for many of the applications of AI we asked about compared to the national average. It is therefore important to consider the views of diverse publics when trying to understand public sentiment towards different applications of AI.

While perceptions of beneficial impact have remained stable, overall concern around AI uses has increased since 2022/23

When comparing responses across both waves of our survey, perceptions of benefit have remained relatively stable (except in the case of facial recognition for policing, where perceptions of benefit have increased slightly) while levels of concern have significantly increased across all applications of AI. However, it is important to note that the comparisons should be read with caution as the sample composition across the two waves is different (refer to the discussion in the methodology section). 

Figures 7 and 8 show a comparison of perceptions of benefit and concern across both surveys for all repeated uses of AI. For example, in 2022/23, 44% of the public were concerned by the use of AI for determining welfare eligibility. This has increased to 59% in 2024/25.

Figure 7: The extent to which each AI use is perceived as beneficial: survey wave comparison

‘To what extent do you think that the use of this technology will be beneficial?’

Technology Very Fairly Don’t know/​prefer not to say Not very Not at all
2022/23
Facial recognition for policing 45% 41% 6% 4% 3%
Assessing risk of cancer 53% 35% 8% 3% 2%
Assessing loan repayment risk 11% 46% 16% 18% 8%
Robotic care assistants 17% 42% 16% 15% 10%
Assessing welfare eligibility 9% 37% 23% 21% 11%
Driverless cars 16% 31% 9% 24% 21%
2024/25
Facial recognition for policing 49% 42% 3% 4% 2%
Assessing risk of cancer 52% 34% 7% 5% 2%
Assessing loan repayment risk 11% 46% 13% 23% 7%
Robotic care assistants 13% 42% 15% 19% 10%
Assessing welfare eligibility 9% 38% 17% 25% 11%
Driverless cars 14% 31% 7% 27% 20%

Figure 8: The extent to which each AI use is perceived as concerning: survey wave comparison

‘To what extent are you concerned by the use of this technology?’ 

[Stacked bar chart comparing concern across the 2022/23 and 2024/25 waves; the data is shown in the table below.]
Technology Very Fairly Don’t know/​prefer not to say Not very Not at all
2022/23
Driverless cars 31% 41% 2% 18% 7%
Robotic care assistants 16% 32% 9% 30% 13%
Assessing welfare eligibility 12% 32% 13% 31% 12%
Assessing loan repayment risk 8% 33% 10% 37% 11%
Facial recognition for policing 8% 26% 3% 37% 25%
Assessing risk of cancer 4% 20% 6% 39% 31%
2024/25
Driverless cars 33% 42% 3% 16% 6%
Robotic care assistants 19% 39% 8% 27% 6%
Assessing welfare eligibility 16% 43% 7% 26% 8%
Assessing loan repayment risk 9% 41% 7% 34% 9%
Facial recognition for policing 7% 32% 2% 40% 19%
Assessing risk of cancer 4% 26% 5% 43% 21%

People are more concerned about robotics than other technologies, and this concern has increased over the last two years

To further understand the relationship between perceptions of benefit and concern around each AI technology, we created a net benefit score for each AI use by subtracting the extent to which each person indicates the AI use was concerning to them from the extent to which they indicate the AI use was beneficial. Positive scores show that perceived benefit outweighs concern, while negative scores show concern outweighs perceived benefit. Scores of zero indicate equal levels of concern and perceived benefit (Figure 9). 
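The report does not publish the exact response coding, but the calculation described above can be sketched as follows. In this illustrative Python sketch, the four-point scale coding, the exclusion of ‘Don’t know’ answers and the toy data are all our assumptions:

```python
# Hypothetical coding of the four-point response scale: "Not at all" = 0 ... "Very" = 3.
# "Don't know/prefer not to say" responses are not in SCALE and are excluded.
SCALE = {"Not at all": 0, "Not very": 1, "Fairly": 2, "Very": 3}

def net_benefit(benefit_responses, concern_responses):
    """Average of (benefit - concern) across respondents who rated both questions."""
    diffs = [
        SCALE[b] - SCALE[c]
        for b, c in zip(benefit_responses, concern_responses)
        if b in SCALE and c in SCALE  # drop "Don't know/prefer not to say"
    ]
    return sum(diffs) / len(diffs)

# Toy data: three respondents rating one technology.
benefit = ["Very", "Fairly", "Fairly"]
concern = ["Not very", "Not at all", "Fairly"]
print(net_benefit(benefit, concern))  # positive => perceived benefit outweighs concern
```

A positive score for a technology means respondents on average rated its benefit higher than their concern about it, mirroring the interpretation of Figure 9.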

We found that for four out of the eight AI uses, perceived benefit outweighs concern: assessing risk of cancer from a scan, facial recognition for policing, LLMs and assessing loan repayment risk. For the remaining four uses, concern outweighs perceived benefit: robotic care assistants, assessing welfare eligibility, mental health chatbots and driverless cars. Looking across both waves of the survey, we find a declining trend in net benefit scores.

For the two new technologies introduced in this wave, general-purpose LLMs like ChatGPT received a positive net benefit score (0.38), while mental health chatbots were viewed negatively (-0.56). As mentioned previously, this might indicate that, while the public sees some value in general-purpose LLMs, there is more hesitation about their application in sensitive areas like mental health support. However, it is important to note that we defined mental health chatbots as those potentially powered by LLMs, and people might not be sufficiently aware of this distinction, so this interpretation remains speculative.

The use of AI for assessing risk of cancer and for facial recognition in policing continue to retain high net benefit scores (1.34 and 1.17, respectively), as in the previous survey wave. However, both scores have declined (from 1.58 and 1.23, respectively). AI applications in welfare and driverless cars continue to face scepticism, with net negative scores as in the previous survey wave. 

Notably, people are more concerned about robotics than about other technologies, and this concern has increased over the last two years. Perceptions of driverless cars have become more negative overall compared with the previous survey wave, and perceptions of robotic care assistants have dipped into negative territory (-0.07), suggesting increased hesitation about their role in caregiving. 

The decrease in net benefit scores for the risk and eligibility technologies, such as those for welfare benefits and loan repayment, might be indicative of growing concerns about their fairness or effectiveness (explored in the next section). 

Figure 9: Net benefit scores

Overall concern for each use of AI subtracted from overall perceptions of benefit (positive scores indicate that benefits outweigh concerns, while negative scores indicate that concerns outweigh benefits)

Wave 1 (2022/23) and Wave 2 (2024/25)

[Bar chart of net benefit scores for each technology in each wave; the data is shown in the table below.]
Use of AI 2022/23 2024/25
Assessing risk of cancer 1.58 1.34
Facial recognition for policing 1.23 1.17
Large language models (e.g., ChatGPT) – 0.38
Assessing loan repayment risk 0.32 0.17
Robotic care assistants 0.21 -0.07
Assessing welfare eligibility 0.04 -0.2
Mental health chatbots – -0.56
Driverless cars -0.52 -0.64

Low income groups in particular are more likely to feel their concerns outweigh perceived benefits of AI

To understand in more detail the association of specific demographic characteristics with overall attitudes to each application of AI, we conducted a regression analysis examining the extent to which an individual’s income, digital skills, awareness of each AI application, age, gender and education level predicts their ‘net benefit’ score for each technology. 

We found that within these factors, income status seems to be driving attitudes. When all other variables are held constant, those on low incomes still have significantly lower net benefit scores than those with higher incomes. This finding suggests that being on a low income may be linked with less acceptance of AI technologies. This could be due to concerns around accessibility, fairness and potential biases in decision making that could impact financial stability – such as through determining eligibility for welfare benefits or loans. It presents a case for understanding in more detail the concerns those on lower incomes have of these technologies and whether and how these technologies can be designed to benefit them. 
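The survey's regression itself is not published as code, but its shape can be sketched with ordinary least squares on simulated data. In this sketch the predictor names, coefficients and data are all illustrative assumptions; the simulated effect simply mirrors the reported direction (low income associated with lower net benefit):

```python
import numpy as np

# Toy data: net benefit score regressed on two illustrative predictors
# (a low-income indicator and a digital-skills score). All values are made up.
rng = np.random.default_rng(0)
n = 200
low_income = rng.integers(0, 2, n)       # 1 = respondent on a low income
digital_skills = rng.normal(0, 1, n)     # standardised digital-skills score
# Simulate the reported pattern: low income -> lower net benefit score.
net_benefit = 0.5 - 0.4 * low_income + 0.2 * digital_skills + rng.normal(0, 0.3, n)

# Design matrix with an intercept column; fit OLS via least squares.
X = np.column_stack([np.ones(n), low_income, digital_skills])
coefs, *_ = np.linalg.lstsq(X, net_benefit, rcond=None)
intercept, b_income, b_skills = coefs
print(f"low-income coefficient: {b_income:.2f}")  # negative, matching the reported direction
```

Holding the other predictor constant, the negative low-income coefficient corresponds to the finding that those on low incomes have significantly lower net benefit scores.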

The Appendix section ‘Predictors of net benefit scores for each technology’ provides more information about the analyses outlined in this section, including further results showing the effects of demographic and attitudinal differences on perceived net benefit for each technology.

Image credit: Muhammad Raufan Yusup on Unsplash