Conclusion
This report offers insights into how publics perceive AI technologies and what they expect from the regulation and governance of AI, with specific findings about minoritised groups and people.
It follows on from our 2022/23 survey, which took place before the emergence of technologies like ChatGPT in public discourse, providing new insights into how attitudes to AI might be changing over time.
The findings reiterate the importance of considering AI technologies in the context in which they are applied. While attitudes towards AI in health diagnostics – e.g. detecting the risk of cancer from a scan – are largely positive, attitudes towards AI in the delivery of care – e.g. mental health chatbots and robotic care assistants – are largely negative. At the same time, across applications of AI, the public can recognise distinct potential benefits and identify areas of concern.
Just as attitudes to AI are multifaceted, so are UK publics. For instance, the survey found that Black/Black British and Asian/Asian British publics are significantly more concerned about facial recognition for policing than the general public. We know from existing evidence that people of colour are disproportionately and negatively affected by the deployment of these technologies.[1] Decision making around AI should consider the distinct impacts of AI technologies on diverse communities and seek to embed their views and values, in recognition that different publics offer distinct insights into whether and how AI is used across different contexts.
The public typically see the potential advantages of many applications of AI as improving efficiency and accuracy. However, people are also increasingly concerned about the safety of their personal data, as well as the replacement of humans in decision making. These attitudes point to a need for evidence on whether AI systems are meeting public expectations around efficiency and accuracy, and on how these systems can address public concerns.
Awareness of AI also varies across applications. Emergent technologies like general-purpose LLMs, as well as less everyday applications such as driverless cars, have relatively high levels of public awareness. At the same time, awareness of more behind-the-scenes applications, such as the use of AI to assess welfare eligibility, is significantly lower. Transparency around the use of AI systems is needed to ensure the public are aware of less visible, but highly impactful, uses of AI – especially those with the potential to disproportionately affect people who are already marginalised in society.
To ensure that the introduction of AI-enabled systems in public sector services works for diverse publics, policymakers must engage and consult these publics to capture the range of attitudes towards, and concerns about, AI expressed by different groups. Capturing diverse perspectives may help to identify high-risk use cases, novel concerns or harms, and the governance measures needed to garner public trust and support adoption.
In the last two years, the public have become more concerned by many applications of AI. At the same time, their preference for laws and regulation has increased. This rising demand for laws and regulation comes at a time when the UK does not have its own comprehensive AI regulation. The evidence suggests that the public support a multi-stakeholder approach to AI safety, with high expectations both of an independent regulator ensuring AI is used safely and of the companies developing AI technologies.
The UK government has repeatedly delayed consultation on AI legislation that would address the potential risks and harms of some of these technologies, a delay that stands in direct contrast to the public's concerns and their growing desire for regulation. This tension presents a risk of low adoption, or even backlash, if AI technologies and the protections people are afforded around them do not meet public expectations. Delivering on the commitment in the AI Opportunities Action Plan to ‘funding regulators to scale up their AI capabilities, some of which need urgent addressing’[2] will support meeting this expectation, recognising that – in the absence of legislation – regulators will need substantial resources, capabilities and expertise to build consideration of AI into their horizon-scanning, guidance and enforcement.
For AI to be developed and deployed responsibly, the hopes, concerns and experiences of the public need to be accounted for. Decision makers and AI developers need to listen to the voices of the public to ensure AI tools work for people and society, rather than further entrench existing inequalities. For example, in addition to traditional consultation methods (which target industry, academia or policy experts), policymakers should draw on evidence of public views where it exists and, where appropriate, engage diverse publics in public deliberation workshops on policy proposals.
References
1. Thaddeus L Johnson and others, ‘Facial Recognition Systems in Policing and Racial Disparities in Arrests’ (2022) 39 Government Information Quarterly 101753.
2. ‘Artificial Intelligence Opportunities Action Plan — Hansard — UK Parliament’ (13 March 2025) <https://hansard.parliament.uk/…> accessed 13 March 2025.