Key findings

These findings are from the latest survey (2025).


We asked people about their awareness of, experience with and attitudes towards eight different uses of AI – six of which were repeated from the 2022/23 survey.

Awareness varies significantly across AI applications.

People have a high awareness of technologies commonly featured in media discourse. For example, 93% of the public have heard of driverless cars, 90% have heard of facial recognition in policing and 61% have heard of large language models (LLMs). By contrast, we found that technologies increasingly used in public services remain largely invisible to members of the public. Despite their significant potential to impact people, especially the most vulnerable, awareness is low for AI used to assess eligibility for welfare benefits such as Universal Credit (18%), for robotic care assistants designed to carry out physical tasks in care settings such as hospitals and nursing homes (24%), and for tools that assess how likely a person is to repay a loan, such as a mortgage (24%).

General-purpose LLMs such as ChatGPT have rapidly gained public awareness and use.

61% of the public have heard of LLMs and 40% have used them. This is rapid uptake for an AI application that began to receive media coverage only in December 2022. However, openness to these tools is context-dependent. For example, 67% of people have used, or are open to using, LLMs to search for answers and recommendations. This figure drops to 53% for using LLMs to support job applications.

40% of the public have used LLMs

Since 2022/23, perceptions of overall benefit for most AI technologies have remained stable, while concern levels have increased.

Whereas previously benefits outweighed concerns for five of the six technologies (all except driverless cars), in the current wave benefits outweigh concerns for only three (cancer risk assessment, facial recognition in policing, and assessing loan repayment risk). The rise in concern is particularly notable for the use of AI in assessing welfare eligibility: in 2022/23, 44% of people were concerned by this technology, rising to 59% in 2024/25.

Different demographic groups have distinct attitudes to applications of AI.

While 39% of the general population expresses concern about the use of facial recognition in policing, this rises to over half among Black (57%) and Asian (52%) people. Some of these concerns are also more strongly held than among the general public: 66% of Black people and 62% of Asian people are concerned by false accusations, compared with 54% of the general public. Similarly, people on lower incomes consistently report lower net benefit scores across AI technologies than people on higher incomes (meaning they are more likely to see their concerns about a technology as outweighing their perceived benefits), even when holding other demographic variables constant. This indicates that income also influences perceptions of AI technologies.

57% of Black people and 52% of Asian people are concerned by the use of facial recognition in policing

The UK public hold nuanced views on the specific benefits and concerns associated with different uses of AI.

Overall, people most commonly identify speed and efficiency improvements as key benefits of AI, while their top concerns centre on overreliance on technology over professional judgement, errors, and a lack of transparency in decision making. Within this overall pattern, benefits and concerns vary by AI application. Even for applications with high levels of perceived benefit, people have concerns: for the use of AI in cancer detection, 64% worry about the loss of professional judgement due to overreliance on technology, while for facial recognition, 54% are concerned about fairness due to the risk of false accusations. Conversely, for the least popular application, driverless cars, 63% see accessibility as a major benefit. Taken together, these points show how people can simultaneously hold perceptions of benefits and concerns, and how perceptions vary across the specific contexts in which AI technologies are used.

The public self-report high exposure to AI-generated harms.

Overall, around two-thirds of the UK public (67%) have experienced some form of AI-related harm a few times, while over a third (39%) have encountered some form of harm many times. The most common harms people report experiencing are false information (61%), financial fraud (58%) and deepfakes (58%).

The public expect the government to be equipped to ensure AI safety.

58% of people believe both an independent regulator and AI companies should be responsible for ensuring AI is used safely, and the majority (over 75%) feel it is 'very important' for the government or independent regulators to have a suite of safety powers, rather than private companies alone having this control. While younger people (18–44) favour company responsibility, those over 55 prefer regulators, reflecting differing levels of trust and expectations across age groups. These expectations are important given the risks to safety people already report experiencing, and the pace at which advancements in AI are being made.

The public increasingly want laws and regulation in order to be more comfortable with AI technologies.

The majority of the public (72%) indicate that laws and regulations would increase their comfort with AI technologies – an increase from 62% in the 2022/23 survey. This rise in demand for laws and regulation comes at a time when the UK does not have its own set of comprehensive regulations around AI. 65% of people said that procedures for appealing decisions made by AI would make them feel more comfortable with AI, and 61% felt that getting information on how AI systems make decisions about them would increase their comfort levels. This will be significant in the context of upcoming regulatory decisions, for example forthcoming UK government changes to the law around automated decision making.
