
Overview

These findings are from the latest survey (2025). Explore previous findings: 2023

AI technologies are proliferating rapidly across society with substantial global investment, yet the discourse and research around public attitudes towards these technologies remain incomplete.

The UK is made up of many publics that are differentially impacted by new technologies. Some applications of AI raise distinct concerns and attitudes among those on lower incomes, those from minoritised ethnic backgrounds and those with fewer digital skills.

Current research suffers from two fundamental gaps: 1) it typically treats AI as a single entity rather than examining specific contextual applications, and 2) it does not adequately represent marginalised voices. These blind spots not only hinder responsible design and development but also preclude effective governance and regulation that could address socioeconomic inequalities.

We welcome the recognition in the UK’s AI Opportunities Action Plan that Government must ‘protect UK citizens from the most significant risks presented by AI and foster public trust in the technology, particularly considering the interests of marginalised groups’. However, there are gaps both in specific commitments against this ambition and in our understanding of people’s perceptions of AI risks and of what would foster public trust. The risk of failing to close these gaps was recently articulated by the Secretary of State, Peter Kyle:

‘Trust is incredibly important in this whole agenda. We have seen too many times in the past where a fearful public have failed to fully grasp the potential for innovation coming out of the scientific community in this country. We are not going to make that mistake. We understand from the outset that to take the public with us we must inspire confidence.’1

To address these broad gaps in understanding, we conducted a nationally representative survey of 3,513 UK residents in November 2024. This survey, which is part of the UKRI-funded Public Voices in AI project,2 is the second iteration of How do people feel about AI?,3 a national survey of attitudes to AI. Our sample was representative of the UK public across age, sex, income, education and ethnicity, among other demographic factors. To strengthen principles of equity and inclusion in our survey design, we deliberately oversampled groups often underrepresented in survey-based research: those with low digital skills, those on lower incomes, and people of Black or Asian ethnicities.
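The report does not set out its weighting procedure, but combining a nationally representative sample with deliberate oversampling of particular groups typically relies on post-stratification weighting when producing national estimates. The sketch below is a minimal, hypothetical illustration of that general technique only; the column names, population shares and responses are invented and do not come from the survey.

```python
# Illustrative sketch of cell-based post-stratification weighting.
# All data below are hypothetical; this is not the survey's actual method.

import pandas as pd

# Hypothetical respondent-level data: one row per respondent.
sample = pd.DataFrame({
    "ethnicity": ["White", "White", "Black", "Asian", "Black", "White"],
    "supports_regulation": [1, 0, 1, 1, 0, 1],
})

# Hypothetical population shares for the same cells (would come from census data).
population_share = {"White": 0.82, "Black": 0.04, "Asian": 0.09}

# Share of each cell in the (deliberately oversampled) achieved sample.
sample_share = sample["ethnicity"].value_counts(normalize=True)

# Post-stratification weight: population share divided by sample share,
# so oversampled cells are weighted down and undersampled cells up.
sample["weight"] = sample["ethnicity"].map(
    lambda g: population_share[g] / sample_share[g]
)

# Weighted estimate, representative of the assumed population distribution.
weighted_support = (
    (sample["supports_regulation"] * sample["weight"]).sum() / sample["weight"].sum()
)
print(f"Weighted support: {weighted_support:.2f}")
```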

We asked people about their awareness of, experience with and attitudes towards eight different uses of AI – six of which were repeated from the 2022/23 survey. The two new technologies were applications of AI launched after the data collection for our 2022/23 survey: general-purpose large language models (LLMs) and mental health chatbots. Four applications are already in use: facial recognition in policing, which is well covered in the media, and technologies for assessing eligibility for welfare benefits, cancer risk or loan repayment, which are less visible in public discourse. We also asked about applications of AI that are not yet part of people’s everyday experiences, such as driverless cars and robotic care assistants. 

For each specific use of AI, we asked people what they believed were the key benefits and concerns, recognising that these perceptions are contextual. People’s perceptions vary in three respects: 1) with the specific context of each AI application, 2) across the different demographic groups people identify with, and 3) in that people often see potential benefit and concern in the same application simultaneously. We also asked people about their views on issues such as AI-generated decision making and harms, and how they would like to see these technologies regulated and governed.



Introduction

Countries and companies worldwide are investing in rapidly deploying AI technologies, leading to unprecedented advancements in AI capabilities. From DeepSeek R1 to OpenAI’s o3, AI models can be used to undertake complex tasks – including reasoning, writing software, generating hyperrealistic images and videos, and engaging in multi-turn open-ended conversations – as well as to contribute to addressing broader challenges such as modernising public services.4 

When designed responsibly and safely, these technologies have the potential to improve people’s lives. However, concerns persist that AI could also exacerbate the socioeconomic inequalities and sense of disempowerment that have had such significant impacts on national political landscapes. Key concerns include job displacement, biases that mean AI tools do not work as intended and risks to safety.

Effectively regulating these trade-offs requires understanding public perspectives, especially as AI becomes increasingly embedded in daily life. The ways individuals across different demographic groups experience and perceive AI provide valuable insights to support its responsible adoption, development and regulation. Without active public involvement, there is a risk of creating an ‘AI-cracy’, where a small, privileged group controls AI development and governance to the detriment of broader society.5 If a significant gap is allowed to develop between public expectations around protection from AI impacts and government action, this could risk igniting a public backlash against AI that would significantly limit its potential benefits.

However, there is currently a lack of evidence on how people view AI. Existing studies have two major limitations: they often focus on AI as a single entity or product rather than examining specific applications, and they do not represent or examine marginalised or underrepresented voices. These gaps in understanding hinder the government’s monitoring of AI’s impact, and consequently its decision making about and effective development of regulation and accountability mechanisms.

To address these gaps, the Ada Lovelace Institute and The Alan Turing Institute conducted a nationally representative survey to assess the UK public’s attitudes towards eight AI applications in risk and eligibility assessment, facial recognition, LLMs and mental health chatbots, and robotics. This study, which is part of the UKRI-funded Public Voices in AI project,6 marks the second iteration of the ‘How do people feel about AI?’ survey.7 It explores public awareness of AI technologies, concerns, perceived benefits and differences in attitudes across demographic groups. Additionally, to inform policy action, it examines public opinions and expectations about AI governance and regulation. While previous studies have explored related issues, this survey distinguishes itself through three key features.

How we define AI

Recognising that AI is a broad and evolving field, and that public perceptions may vary based on specific applications,7 this study seeks to understand public attitudes towards specifically defined AI use cases. Other research often relies on broad definitions of AI, providing limited application-specific insights,8, 9, 10, 11 or focuses on singular use cases, such as attitudes towards biometrics in policing and law enforcement.12 These approaches make it difficult to capture public sentiment comprehensively.

Our survey enables respondents to express both benefits and concerns associated with distinct AI applications. In this wave, we examined public attitudes towards the following AI categories:

  • Risk and eligibility assessments and facial recognition: Assessing eligibility for welfare benefits, assessing risk of cancer from a scan, assessing risk of repaying a loan, and facial recognition for policing
  • LLMs and mental health chatbots: General-purpose large language models (LLMs) and mental health chatbots
  • Robotics: Driverless cars and robotic care assistants.

Understanding diverse perspectives

This study highlights the perspectives of different demographic groups, especially those marginalised in conversations and research about AI. We focus on:

  • people on lower incomes
  • digitally excluded people
  • people from minoritised ethnic groups, such as Black, Black British, Asian and Asian British people.

Recognising diverse lived experiences is crucial, as social conditions significantly influence how AI affects individuals and groups of people in different contexts. Privilege and disadvantage shape who can influence and benefit from AI systems. For instance, it is well documented that AI systems can encode a range of biases, including those relating to race, gender and ability.13 These biases range from facial recognition technology that classifies White male faces more accurately than those of darker-skinned women,14 to algorithms that disadvantage women in recruitment.15 If these inequalities are not addressed in the evidence base around AI, they risk being exacerbated. As AI continues to reshape sectors such as healthcare, law and social welfare, it is vital to include diverse voices in discussions about its future to prevent deepening societal divisions.16

Insights for regulation

Understanding public attitudes towards AI governance is essential for decision-makers shaping policies that support accountability, fairness and transparency and for determining when and how people expect to be protected from any negative impacts of AI. However, few large-scale studies have explored public preferences for AI regulation or the level of explainability expected in AI decision-making processes. Our research aims to fill these gaps by examining:

  • mechanisms that increase people’s comfort towards the adoption of AI
  • concerns about AI-generated decisions and preferences for explainability versus accuracy in AI decision-making
  • concerns around AI safety and trust in different stakeholders in relation to regulation and governance
  • concerns related to data sharing, and representation in decision making.

By exploring these factors, our research aims to inform policymakers, technology developers and regulators on how AI can be developed and governed in ways that reflect societal values and public preferences.

Ultimately, this study contributes to building the evidence base on diverse public attitudes towards distinct AI applications and their regulation.

References

  1. ‘Artificial Intelligence Opportunities Action Plan — Hansard — UK Parliament’ (13 March 2025) <https://hansard.parliament.uk/…> accessed 13 March 2025.
  2. ‘Public Voices in AI’ (ESRC Digital Good Network) <https://digitalgood.net/dg-res…> accessed 13 March 2025.
  3. Ada Lovelace Institute and Alan Turing Institute, ‘How Do People Feel about AI? A Nationally Representative Survey of Public Attitudes to Artificial Intelligence in Britain’ (2023) <https://www.adalovelaceinstitu…> accessed 6 June 2023.
  4. Jonathan Bright and others, ‘Generative AI Is Already Widespread in the Public Sector’ (arXiv, 2 January 2024) <http://arxiv.org/abs/2401.0129…> accessed 13 March 2025.
  5. Reema Patel, ‘A Framework and Self Assessment Workbook for Including Public Voices in AI’ (Elgon Social Research and ESRC Digital Good Network) <https://digitalgood.net/dg-research/public-voices-in-ai/> accessed 13 March 2025.
  6. ‘Public Voices in AI’ (ESRC Digital Good Network) <https://digitalgood.net/dg-res…> accessed 13 March 2025.
  7. Ada Lovelace Institute and Alan Turing Institute, ‘How Do People Feel about AI? A Nationally Representative Survey of Public Attitudes to Artificial Intelligence in Britain’ (2023) <https://www.adalovelaceinstitu…> accessed 6 June 2023.
  8. Workday, ‘2024 Global Study: Closing the AI Trust Gap’ (2024) <https://forms.workday.com/en‑u…> accessed 13 March 2025.
  9. American Psychological Association, ‘2023 Work in America Survey: Artificial Intelligence, Monitoring Technology, and Psychological Well-Being’ (APA 2023) <https://www.apa.org/pubs/repor…> accessed 26 September 2023.
  10. Alec Tyson and Emma Kikuchi, ‘Growing Public Concern about the Role of Artificial Intelligence in Daily Life’ (Pew Research Center, 28 August 2023) <https://www.pewresearch.org/sh…> accessed 13 March 2025.
  11. Ipsos, ‘Public Trust in AI: Implications for Policy and Regulation’ (2024) <https://www.ipsos.com/sites/de…> accessed 13 March 2025.
  12. Sam Stockwell and others, ‘The Future of Biometric Technology for Policing and Law Enforcement’ (Centre for Emerging Technology and Security, March 2024) <https://cetas.turing.ac.uk/pub…> accessed 13 March 2025.
  13. Meredith Broussard, More than a Glitch: Confronting Race, Gender, and Ability Bias in Tech (MIT Press, 2023).
  14. Joy Buolamwini and Timnit Gebru, ‘Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification’, Conference on Fairness, Accountability and Transparency (2018).
  15. Reuters, ‘Insight — Amazon Scraps Secret AI Recruiting Tool That Showed Bias against Women’ (2018) <https://www.reuters.com/articl…> accessed 17 March 2025.
  16. Reema Patel, ‘A Framework and Self Assessment Workbook for Including Public Voices in AI’ (Elgon Social Research and ESRC Digital Good Network) <https://digitalgood.net/dg-research/public-voices-in-ai/> accessed 13 March 2025.