Methodology
A summary of the key aspects of how the survey was run.
How do people feel about AI? is a tracker survey of public attitudes to AI in the UK. It explores perceptions of benefit and levels of concern around different applications of AI, as well as expectations around governance and regulation. The second wave of the survey was conducted in 2024/25 and covers the UK; the original survey ran in 2022/23 and focused on Great Britain.
2025
For a more detailed account of how we designed our survey and our sampling approach, please refer to our technical report. The survey was approved by the Ethics Committee at The Alan Turing Institute, UK (approval number: 24091908). The survey materials and data will be available as open access resources on publication.
Sample
The sample was drawn from the National Centre for Social Research Opinion Panel. This is a standing panel of people who have been recruited based on a random probability design, meaning they were selected at random to be invited onto the panel. For this survey, a random subsample of 5,650 panel members was invited to take part. Fieldwork ran from 25 October 2024 until 24 November 2024. We achieved a 62% response rate, with a final sample of 3,513 participants. The majority of respondents (94%) completed the questionnaire online. Two hundred and twenty-two people were interviewed by telephone, either because they did not use the internet or because this was their preference.
We were interested in exploring in more depth the views and experiences of people in the following groups:
- Individuals on lower incomes, measured as those with an equivalised monthly household income of £1,500 or less. When asked about their perception of their financial status, the majority of people in this group felt they were either finding things difficult financially or just about getting by.
- Digitally excluded populations, measured as those with low digital skills. We used an adapted measure from Lloyds’ Consumer Digital Index1 to understand digital skills. This captures whether participants can perform a range of digital tasks under the broad skills of ‘managing information’, ‘communicating’, ‘transacting’, ‘problem solving’ and ‘creating’. Those who could not do at least one task under each skill were classed as having low levels of digital skills. Many of those interviewed over the phone (65%) fell into this group.
- People from Black/Black British and Asian/Asian British ethnic backgrounds.
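The low digital skills classification described above is a simple rule: a respondent counts as having low digital skills if there is at least one skill area in which they could do no task. As an illustrative sketch only (the survey's actual coding is in the technical report, and the task lists and data format here are hypothetical), the rule could look like:

```python
# Skill areas follow the adapted Lloyds Consumer Digital Index measure
# described in the report; the per-area task data is hypothetical.
SKILL_AREAS = ["managing information", "communicating", "transacting",
               "problem solving", "creating"]

def has_low_digital_skills(tasks_done: dict[str, list[bool]]) -> bool:
    """Low digital skills: at least one skill area where the respondent
    could not do any of the tasks asked about."""
    return any(not any(tasks_done.get(area, [])) for area in SKILL_AREAS)
```

For example, a respondent who can do at least one task in every area is not classed as low-skilled, while a respondent who can do no 'creating' task is.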
The focus on these groups was premised on the fact that the UK is made up of many publics that are differentially impacted by new technologies, and that some people and groups are frequently missing from existing research. Some applications of AI technologies bring up unique concerns and attitudes from those living with lower incomes,2 minoritised ethnic groups3 and those with fewer digital skills.4 Typically, it is difficult to obtain large enough sample sizes for subgroup analysis for these populations in a nationally representative survey alone, a limitation we noted in the previous iteration of this survey. We therefore oversampled based on these subgroups, while recognising that people have intersecting identities and often belong to more than one identity group.
Our final sample consists of: 433 (12% of the overall sample) Asian or Asian British participants, 198 (6% of the overall sample) Black or Black British participants, 1,319 (38%) low-income participants and 962 (27%) low digital skills participants. Respondents were all over the age of 18. Unweighted, a total of 1,875 (53%) were female, 1,632 (46%) male, with no sex recorded for six participants. Unlike the previous wave, which looked at participants in England, Scotland and Wales, this survey covered participants across the four nations of the United Kingdom: 2,937 (84%) were from England, 156 (4%) from Northern Ireland, 255 (7%) from Scotland and 165 (5%) from Wales. These proportions are broadly in line with UK population estimates across the four nations.5
The data was weighted based on official statistics to match the demographic profile of the UK population and to adjust for sampling probabilities used in the sampling process and non-response to this survey. All figures reported in this study, unless otherwise specified, are adjusted according to this weighting.
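The general idea behind this kind of adjustment is post-stratification: each respondent receives a weight equal to their demographic cell's population share divided by its sample share, so that weighted sample shares match the population. The sketch below illustrates only that core idea with hypothetical shares; the survey's actual weighting (detailed in the technical report) also adjusts for sampling probabilities and non-response.

```python
from collections import Counter

def poststratify(cells: list[str], population_share: dict[str, float]) -> list[float]:
    """Return one weight per respondent so that weighted cell shares
    match the given population shares (simple cell weighting)."""
    n = len(cells)
    sample_share = {c: k / n for c, k in Counter(cells).items()}
    return [population_share[c] / sample_share[c] for c in cells]
```

For instance, if group A makes up 60% of the sample but 50% of the population, its members are down-weighted (weight 0.5/0.6 ≈ 0.83) and group B's members are up-weighted, restoring a 50/50 weighted split.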
Survey
We told respondents that each of the technologies in this survey uses AI to varying degrees. We provided the following definition of AI:
‘Artificial intelligence (AI) is a term that describes the use of computers and digital technology to perform complex tasks commonly thought to require human reasoning and logic. AI systems typically analyse large amounts of data to draw insights or patterns and achieve specific goals. They can sometimes take actions autonomously, that is without human direction. These systems can also be used to generate content like text, images, music or videos.’
The survey first covered general attitudes to new technologies, the ability to carry out certain digital tasks, and access to a smartphone and mobile data. We then asked about awareness of specific uses of AI, experiences with select uses, how beneficial and concerning respondents perceived each use of AI to be, and the key risks and benefits they associated with each. We also measured preferences around the governance and regulation of AI technologies, including different elements of decision making, safety, and data sharing and representation.
The specific technologies we asked about were:
- Risk and eligibility assessment technologies and facial recognition:
- Facial recognition for policing and surveillance
- Assessing eligibility for welfare benefits
- Assessing risk of cancer from a scan
- Assessing risk of repaying a loan
- LLMs and mental health chatbots
- General-purpose LLMs
- Mental health chatbots
- Robotics
- Robotic care assistants
- Driverless cars
Respondents were asked about all of the above AI use cases. The survey questions can be accessed in our technical report.
Analysis
We analysed the data between December 2024 and March 2025, using descriptive analyses for all survey variables followed up with two-proportion z‑tests for different demographic groups. We then used regression analyses to understand relationships between demographic and attitudinal variables, and perceived benefit of specific technologies (see Appendix section ‘Predictors of net benefit scores for each technology’ for further information).
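A two-proportion z-test compares the share of two groups giving a particular response (for example, the proportion in each demographic group rating a use of AI as beneficial). As a minimal illustration with hypothetical counts (the report's analysis was run in R; this Python version is only a sketch of the standard pooled test):

```python
from math import sqrt, erf

def two_prop_ztest(x1: int, n1: int, x2: int, n2: int) -> tuple[float, float]:
    """Return (z, two-sided p-value) for H0: p1 == p2, using the
    pooled proportion to estimate the standard error."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p via the standard normal CDF: Phi(z) = 0.5 * (1 + erf(z / sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value
```

With identical proportions the statistic is zero and the p-value is 1; with 60/100 versus 50/100 the difference is not significant at the 95% confidence level used in the report.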
We analysed the data using the statistical programming language R, and used a 95% confidence level to assess statistically significant results. Analysis scripts and the full survey dataset can be accessed on the Ada Lovelace Institute’s GitHub site.
In this report, we generalise from a nationally representative sample of the UK population to refer to the ‘UK public’. This does not refer to UK nationals, but rather people living in the UK at the time the survey was conducted.
Limitations
We recognise that our approach has limitations that need to be considered when interpreting our findings. In line with the objectives of the Public Voices in AI programme, and recognising that minoritised groups are often missing from nationally representative survey data, we set out to design a survey that provided information about these groups.
We were limited in terms of the sample we were able to access, a limitation that reflects the broader ecosystem of UK survey providers. We assessed nine separate providers for their ability to deliver a survey that would provide enhanced information about minoritised groups, and chose the provider best able – through an existing recruitment panel – to access sufficient numbers of people from different backgrounds and offer robust sampling methodologies.
Based on sample availability within the recruitment panel we used, we were able to boost to some extent the number of people from minoritised ethnic backgrounds, on lower incomes and with fewer digital skills. However, we were only able to boost sample sizes for Asian/Asian British and Black/Black British populations sufficiently to enable some sub-group analysis. This limitation was due to both sample availability and budget.
While these sample sizes are greater than those in other surveys, they do not represent the diversity of the UK population. Where they do represent specific minoritised groups, they are broad categories that obscure the diversity within these groupings, for example, Bangladeshi Asian, Pakistani Asian or Chinese Asian experiences. We recognise that using broad ethnicity groupings can risk homogenising the experiences of distinct communities, but we were not able to produce a sufficient sample to accurately represent the attitudes of those people and communities.
Furthermore, while our target was to achieve at least 380 responses for each minoritised ethnic group to enable sub-group analysis, we were only able to achieve this for the Asian/Asian British demographic group. This was due to limitations of the recruitment panel, which had only 353 Black/Black British people registered. Through issuing the survey to the entire population of Black/Black British people, we were able to achieve a sample of 198 Black/Black British respondents. These factors reflect the challenge in surveying minoritised groups, despite intent to design surveys for equity and inclusion and produce data that relates to these groups.
To avoid limiting sample sizes further, we opted to avoid any routing in the survey, meaning all participants saw all of the questions. This meant we had to make trade-offs in terms of survey coverage and we were unable to explore as wide a range of AI applications as we had previously in our 2022/23 study. It also meant we were unable to explore in depth experiences of specific technologies.
Due to limited panel information on key factors impacting digital exclusion, such as access to digital goods, affordability and digital skills, we had to adopt a narrow definition of digital exclusion in our sample. Recruitment was based on a basic measure of internet access, including individuals with no internet access, those who reported using the internet less than once a week, and those who used it weekly but either participated more in phone surveys than online or had not provided an email address.
Our analysis of this group considered a more comprehensive measure of digital skills, as mentioned above. But because the sample composition itself did not fully capture the true extent of digital exclusion, it is likely that individuals with fewer digital skills are underrepresented in our findings, limiting the generalisability of insights into the digitally excluded population.
Although we recognise the importance of considering multiple and intersecting identities, the survey did not engage deeply with intersectionality as a framework for analysis to understand how the intersection of multiple identities and systems of oppression may impact on experiences of, and attitudes towards, AI technologies. This was not only because of limitations in sampling minoritised groups and reaching targeted sample sizes, but also because our research questions focused primarily on the general population, supplemented by exploratory subgroup analysis across sociodemographic factors. As such, our findings offer insight into distinct experiences of some communities at a broad level.
To overcome these limitations would require enhanced provision in the UK survey ecosystem, as well as sufficient resource to incentivise participation from a range of minoritised people and groups.
Finally, due to differences in sample composition between this survey and its first iteration, comparative results need to be interpreted with caution. This survey follows a cross-sectional design rather than a longitudinal one, meaning it examines different cross-sections of the public in each wave rather than surveying the same people over time. Moreover, the current survey iteration oversampled based on specific demographics, as detailed above. Weighting has been applied to make the sample representative of the UK public, and where comparisons across survey iterations have been made, they compare a nationally representative Great British public (Wave 1) with a nationally representative UK public (Wave 2) to enable as much comparability as possible. But these comparisons are indicative of trends, and not conclusive evidence of changes over time.
2023
The technical report, containing full details of the methodological approach, including how we designed our questions for the study, can be accessed separately.
Sample
The sample was drawn from the Kantar Public Voice random probability panel. This is a standing panel of people who have been recruited to take part in surveys using random sampling methods. At the time the survey was conducted, it had 24,673 active panel members who lived in Great Britain and were aged 18 or over. These panel members were grouped by sex/age group, highest educational level and region, before a systematic random sample was drawn.
We did fieldwork in late 2022, and issued the survey in three stages:
- a soft launch with a random subsample of 500 panel members
- a launch with the remainder of the main panel members and
- a final launch with reserve panel members
A total of 4,010 respondents completed the survey and passed standard data quality checks. The majority of respondents completed the questionnaire online, while 252 were interviewed by telephone, either because they did not use the internet or because this was their preference.
Respondents were aged between 18 and 94. Unweighted, a total of 1,911 (48%) identified as male, and 2,096 (52%) as female, with no sex recorded for three participants.
The majority (3,544; 88%) of respondents were white; 261 (7%) were Asian or Asian British; 90 (2%) were Black, African, Caribbean or Black British; and 103 (3%) were mixed, multiple or other ethnicities; with no ethnicity recorded for 12 participants.
The data was weighted based on official statistics to match the demographic profile of the population (see technical report). However, with a sample size of 4,010, it is not possible to provide robust estimates of differences across minority ethnic groups, so these are not reported here.
Survey
We told respondents that the questions focus on people’s attitudes towards new technologies involving artificial intelligence (AI), and presented the following definition of AI to them:
AI is a term that describes the use of computers and digital technology to perform complex tasks commonly thought to require intelligence. AI systems typically analyse large amounts of data to take actions and achieve specific goals, sometimes autonomously (without human direction).
Respondents then answered some general questions about attitudes to new technologies and how confident they feel using computers for different tasks. They were then asked questions about their awareness of and experience with specific uses of AI; how beneficial and concerning they perceive each use to be; and about the key risks and benefits associated with each.
The specific technologies we asked about were:
- facial recognition (uses were unlocking a mobile phone or other device, border control, and in policing and surveillance)
- assessing eligibility (uses were for social welfare and for job applications)
- assessing risk (uses were risk of developing cancer from a scan and loan repayments)
- targeted online advertising (for consumer products and political adverts)
- virtual assistants (uses were smart speakers and healthcare chatbots)
- robotics (uses were robotic vacuum cleaners, robotic care assistants, driverless cars and autonomous weapons)
- simulations (uses were simulating the effects of climate change and virtual reality for educational purposes).
These 17 AI uses were chosen based on emerging policy priorities and increased usage in public life.
Read more about the descriptions of each use.
Read the technical report for information about our questionnaire design.
To keep the duration of the survey to an average of 20 minutes, we employed a modular questionnaire structure. Each person responded to questions about nine of the 17 different AI uses. All participants were asked about facial recognition for unlocking a mobile phone and then responded to one of the two remaining uses of facial recognition.
They were then asked about one of the two uses for the other technologies, other than robotics, for which there were four uses. For robotics, each participant considered either robotic vacuum cleaners or robotic care assistants, and then either driverless cars or autonomous weapons. After responding to questions for each specific AI use, participants answered three general questions about AI governance, regulation and explainability.
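This modular allocation means every respondent answers about 1 + 1 + 5 + 2 = 9 of the 17 uses: phone unlocking for everyone, then one use from each of eight pairs. The sketch below is a hypothetical reconstruction of that allocation only; use names are shortened and the actual randomisation scheme is described in the technical report.

```python
import random

# One pair per module; each respondent sees one use from each pair.
# Robotics contributes two pairs (four uses in total).
PAIRED_USES = {
    "facial recognition": ["border control", "policing and surveillance"],
    "eligibility":        ["welfare", "job applications"],
    "risk":               ["cancer scan", "loan repayments"],
    "advertising":        ["consumer products", "political adverts"],
    "virtual assistants": ["smart speakers", "healthcare chatbots"],
    "robotics A":         ["robotic vacuum cleaners", "robotic care assistants"],
    "robotics B":         ["driverless cars", "autonomous weapons"],
    "simulations":        ["climate change", "virtual reality education"],
}

def assign_uses(rng: random.Random) -> list[str]:
    """Everyone gets phone unlocking, plus one use from each pair: 9 uses."""
    return ["unlocking a phone"] + [rng.choice(pair) for pair in PAIRED_USES.values()]
```

One fixed use plus eight binary choices covers all 1 + 8 × 2 = 17 uses across respondents while keeping each individual questionnaire to nine.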
The survey was predominantly made up of close-ended questions, with respondents being asked to choose from a list of predetermined answers.
Analysis
We analysed the data between January 2023 and March 2023, using descriptive analyses for all survey variables followed up with chi-square testing of differences across specific demographic groups. We then used regression analyses to understand relationships between demographic and attitudinal variables, and perceived benefit of specific technologies.
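A chi-square test of independence checks whether response frequencies differ across groups more than chance would predict. As a minimal sketch for a 2x2 table with hypothetical counts (the report's analysis was run in R; this is illustration only):

```python
def chi_square_2x2(table: list[list[float]]) -> float:
    """Chi-square statistic (1 degree of freedom, no continuity correction)
    for a 2x2 table of observed counts: sum of (O - E)^2 / E over cells,
    where E is the product of the cell's row and column totals over n."""
    row_totals = [sum(r) for r in table]
    col_totals = [sum(c) for c in zip(*table)]
    n = sum(row_totals)
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (table[i][j] - expected) ** 2 / expected
    return chi2
```

A table where both groups respond identically gives a statistic of zero; the statistic grows as observed counts diverge from the expected counts under independence.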
We analysed the data using the statistical programming language R, and used a 95% confidence level to assess statistically significant results. Analysis scripts and the full survey dataset can be accessed on the Ada Lovelace Institute GitHub site.
We refer to the ‘British public’ (sometimes shortened to ‘the public’) or ‘people in Britain’ (sometimes shortened to ‘people’) throughout — based on the representative sample of the population of Great Britain. This phrasing does not refer to British nationals, but rather to people living in Great Britain at the time the survey was conducted.
References
- Lloyds Bank, ‘UK Consumer Digital Index’ (2018) <https://www.lloydsbank.com/ass…> accessed 13 March 2025.
- Ada Lovelace Institute, ‘Access Denied? Socioeconomic Inequalities in Digital Health Services’ (2023) <https://www.adalovelaceinstitu…> accessed 3 February 2025.
- Ada Lovelace Institute, ‘The Citizens’ Biometrics Council’ (2021) <https://www.adalovelaceinstitu…> accessed 3 February 2025.
- Joseph Rowntree Foundation, ‘AI shifts the goalposts of digital inclusion’ (JRF, February 2024) <https://www.jrf.org.uk/ai-for-…> accessed 13 March 2025.
- ‘Population Estimates for the UK, England, Wales, Scotland and Northern Ireland’ (Office for National Statistics, October 2024) <https://www.ons.gov.uk/peoplep…> accessed 13 March 2025.