This section summarises the key aspects of how the survey was run.
See the technical report for full details of the methodological approach including how we designed questions for the study.
The sample was drawn from the Kantar Public Voice random probability panel. This is a standing panel of people who have been recruited to take part in surveys using random sampling methods. At the time the survey was conducted, it had 24,673 active panel members who lived in Great Britain and were aged 18 or over. These panel members were grouped by sex/age group, highest educational level and region before a systematic random sample was drawn.
We did fieldwork in late 2022, and issued the survey in three stages:
- a soft launch with a random subsample of 500 panel members
- a launch with the remainder of the main panel members and
- a final launch with reserve panel members
A total of 4,010 respondents completed the survey and passed standard data quality checks. The majority of respondents completed the questionnaire online, while 252 were interviewed by telephone, either because they did not use the internet or because this was their preference.
Respondents were aged between 18 and 94. Unweighted, a total of 1,911 (48%) identified as male, and 2,096 (52%) as female, with no sex recorded for three participants.
The majority (3,544; 88%) of respondents were white; 261 (7%) were Asian or Asian British; 90 (2%) were Black, African, Caribbean or Black British; and 103 (3%) were mixed, multiple or other ethnicities; with no ethnicity recorded for 12 participants.
The data was weighted based on official statistics to match the demographic profile of the population (see technical report). However, with a sample size of 4,010, it is not possible to provide robust estimates of differences across minority ethnic groups, so these are not reported here.
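The weighting scheme itself is set out in the technical report; as an illustrative sketch only, post-stratification weighting of this kind is often done by raking (iterative proportional fitting), where respondent weights are repeatedly scaled until weighted marginals match population targets. The categories and target proportions below are invented for illustration.

```python
# Illustrative sketch of survey weighting via raking (iterative
# proportional fitting). This is NOT the survey's actual weighting
# code; categories and targets here are hypothetical.

def rake(sample, targets, iterations=50):
    """Adjust per-respondent weights so weighted marginals match targets.

    sample  : list of dicts, one per respondent, e.g. {"sex": "F"}
    targets : {dimension: {category: desired population proportion}}
    Returns a list of weights aligned with `sample`.
    """
    weights = [1.0] * len(sample)
    for _ in range(iterations):
        for dim, props in targets.items():
            # current weighted total for each category on this dimension
            totals = {}
            for w, row in zip(weights, sample):
                totals[row[dim]] = totals.get(row[dim], 0.0) + w
            total = sum(weights)
            # scale each respondent's weight towards the target share
            for i, row in enumerate(sample):
                share = totals[row[dim]] / total
                weights[i] *= props[row[dim]] / share
    return weights

# Toy sample of three respondents that over-represents one group
sample = [{"sex": "F"}, {"sex": "F"}, {"sex": "M"}]
targets = {"sex": {"F": 0.5, "M": 0.5}}
w = rake(sample, targets)
# After raking, the weighted share of each sex matches the 50/50 target.
```

In practice, survey weighting rakes over several dimensions at once (for this survey, based on official statistics for the demographic profile of Great Britain), and extreme weights are usually trimmed.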
We told respondents that the questions focus on people’s attitudes towards new technologies involving artificial intelligence (AI), and presented the following definition of AI to them:
AI is a term that describes the use of computers and digital technology to perform complex tasks commonly thought to require intelligence. AI systems typically analyse large amounts of data to take actions and achieve specific goals, sometimes autonomously (without human direction).
Respondents then answered some general questions about attitudes to new technologies and how confident they feel using computers for different tasks. They were then asked questions about their awareness of and experience with specific uses of AI; how beneficial and concerning they perceive each use to be; and about the key risks and benefits associated with each.
The specific technologies we asked about were:
- facial recognition (uses were unlocking a mobile phone or other device, border control, and in policing and surveillance)
- assessing eligibility (uses were for social welfare and for job applications)
- assessing risk (uses were risk of developing cancer from a scan and loan repayments)
- targeted online advertising (for consumer products and political adverts)
- virtual assistants (uses were smart speakers and healthcare chatbots)
- robotics (uses were robotic vacuum cleaners, robotic care assistants, driverless cars and autonomous weapons)
- simulations (uses were simulating the effects of climate change and virtual reality for educational purposes)
These 17 AI uses were chosen based on emerging policy priorities and increased usage in public life.
To keep the duration of the survey to an average of 20 minutes, we employed a modular questionnaire structure. Each person responded to questions about nine of the 17 different AI uses. All participants were asked about facial recognition for unlocking a mobile phone and then responded to one of the two remaining uses of facial recognition.
They were then asked about one of the two uses for each of the other technologies, with the exception of robotics, which had four uses. For robotics, each participant considered either robotic vacuum cleaners or robotic care assistants, and then either driverless cars or autonomous weapons. After responding to questions for each specific AI use, participants answered three general questions about AI governance, regulation and explainability.
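The allocation above can be sketched as follows. This is a hypothetical illustration of the modular structure, not the survey's actual allocation code; use names are paraphrased from the list above. Each respondent gets the phone-unlocking use plus one use from each of eight pairs, giving nine uses in total.

```python
import random

# Hypothetical sketch of the modular questionnaire allocation:
# every respondent sees facial recognition for unlocking a phone,
# then one use drawn from each of the eight pairs below (robotics
# contributes two pairs, since it had four uses).

PAIRS = {
    "facial recognition": ["border control", "policing and surveillance"],
    "assessing eligibility": ["social welfare", "job applications"],
    "assessing risk": ["cancer risk from a scan", "loan repayments"],
    "targeted advertising": ["consumer products", "political adverts"],
    "virtual assistants": ["smart speakers", "healthcare chatbots"],
    "robotics (pair 1)": ["robotic vacuum cleaners", "robotic care assistants"],
    "robotics (pair 2)": ["driverless cars", "autonomous weapons"],
    "simulations": ["climate change effects", "virtual reality for education"],
}

def assign_module(rng=random):
    """Return the nine AI uses one respondent is asked about."""
    uses = ["unlocking a mobile phone"]   # asked of everyone
    for options in PAIRS.values():
        uses.append(rng.choice(options))  # one use per pair
    return uses

module = assign_module()
# 1 universal use + 8 randomly assigned uses = 9 of the 17 uses
```

Randomising within pairs keeps the questionnaire to roughly half its full length for any one respondent while still collecting responses on all 17 uses across the sample.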
The survey was predominantly made up of closed-ended questions, with respondents asked to choose from a list of predetermined answers.
We analysed the data between January 2023 and March 2023, using descriptive analyses for all survey variables followed up with chi-square testing of differences across specific demographic groups. We then used regression analyses to understand relationships between demographic and attitudinal variables and the perceived benefit of specific technologies.
We analysed the data using the statistical programming language R, and used a 95% confidence level to assess statistically significant results. Analysis scripts and the full survey dataset can be accessed on the Ada Lovelace Institute GitHub site.
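The actual analysis was carried out in R (see the published scripts); as an illustrative sketch only, a chi-square test of independence on a hypothetical 2x2 table of demographic group against response, assessed at the 95% confidence level, looks like this:

```python
# Illustrative sketch, not the report's analysis code (which was in R):
# Pearson chi-square test of independence on a hypothetical
# 2x2 contingency table of demographic group vs. response.

def chi_square_statistic(table):
    """Pearson chi-square statistic for a 2D contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical counts: agree/disagree split for two demographic groups
table = [[120, 80],
         [90, 110]]
stat = chi_square_statistic(table)

# For a 2x2 table (1 degree of freedom) the chi-square critical value
# at the 95% confidence level is about 3.84, so a statistic above that
# indicates a statistically significant association.
significant = stat > 3.84
```

In practice this would be done with a library routine (e.g. `chisq.test` in R) on the weighted survey data rather than raw counts.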
We refer to the ‘British public’ (sometimes shortened to ‘the public’) or ‘people in Britain’ (sometimes shortened to ‘people’) throughout — based on the representative sample of the population of Great Britain. This phrasing does not refer to British nationals, but rather to people living in Great Britain at the time the survey was conducted.