Appendix
Predictors of net benefit scores for each technology
To understand how demographic and attitudinal variables relate to the perceived net benefits of AI, we fitted a linear regression model for each individual AI technology using the same set of predictor variables. The dependent variable in each model was ‘net benefit’, calculated as described above. The independent variables in each model were as follows (a code sketch of this setup appears after the list):
- Age (65 and older compared to younger than 65)
- Sex (male compared to female)
- Education (having a degree compared to not having a degree)
- Awareness of the technology (aware compared to not aware)
- Digital skills (has digital skills compared to does not have digital skills)
- Low income (equivalised household income of £1,500 or less per month compared to above £1,500; see Table 10)
- Black/Black British ethnicity (compared to non-Black respondents)
- Asian/Asian British ethnicity (compared to non-Asian respondents)
- Tech pace (self-reported informedness about pace of technology change)
- Tech impact (views about technology making society better or worse)
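The report does not publish its analysis code, so the following is a minimal sketch of the modelling setup described above. All column names, codings and the synthetic data are illustrative assumptions, not the survey’s actual variable names or values.

```python
# Minimal sketch, not the report's actual analysis code. Column names,
# codings and the synthetic data below are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500  # synthetic stand-in for the survey sample

# Binary (0/1) dummy-coded predictors matching the list above
BINARY = ["age_65_plus", "male", "has_degree", "aware",
          "digital_skills", "low_income", "black_british", "asian_british"]
df = pd.DataFrame({p: rng.integers(0, 2, n) for p in BINARY})
df["tech_pace"] = rng.integers(1, 6, n)    # attitudinal items, assumed 1-5 scales
df["tech_impact"] = rng.integers(1, 6, n)

# Hypothetical short names for the eight AI uses
TECHNOLOGIES = ["cancer_risk", "loan_risk", "welfare_eligibility",
                "facial_recognition", "driverless_cars", "care_robots",
                "llms", "mh_chatbots"]
for tech in TECHNOLOGIES:
    # Placeholder net-benefit scores; the report derives these from
    # respondents' benefit and concern ratings
    df[f"net_benefit_{tech}"] = rng.normal(0, 1, n)

RHS = " + ".join(BINARY + ["tech_pace", "tech_impact"])
# One OLS model per technology, with the same predictor set in every model
results = {tech: smf.ols(f"net_benefit_{tech} ~ {RHS}", data=df).fit()
           for tech in TECHNOLOGIES}

# Inspect coefficients and p-values for one technology
print(results["driverless_cars"].summary())
```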
Figure 20 presents the results of all regressions in a single plot. Each square represents the expected change in net benefit for a unit increase in the independent variable listed on the vertical axis, controlling for all other variables in the model.
Statistically significant coefficients (p < 0.05) are shown in red; coefficients shown in black are not statistically significant. Estimates above 0 indicate a higher net benefit, while estimates below 0 are associated with a lower net benefit (or greater concern) for that variable.

Figure 20: Predictors of net benefit
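As a rough illustration of the plotting convention described above (squares for coefficient estimates, red for p < 0.05, black otherwise), the sketch below draws a comparable plot for one technology. It reuses the hypothetical `results` dictionary from the previous snippet and is not the code behind Figure 20 itself.

```python
# Sketch of a Figure 20-style coefficient plot, using the `results` dict
# from the previous snippet; styling follows the convention in the text
# (red squares = significant at p < 0.05, black = not significant).
import matplotlib.pyplot as plt

model = results["driverless_cars"]        # one technology per plot here
params = model.params.drop("Intercept")   # coefficient estimates
pvals = model.pvalues.drop("Intercept")
ci = model.conf_int().drop("Intercept")   # 95% confidence intervals

fig, ax = plt.subplots(figsize=(7, 5))
for i, name in enumerate(params.index):
    colour = "red" if pvals[name] < 0.05 else "black"
    ax.plot(ci.loc[name], [i, i], color=colour, lw=1)  # CI whisker
    ax.plot(params[name], i, "s", color=colour)        # square marker

ax.axvline(0, color="grey", linestyle="--", lw=1)  # zero = no change in net benefit
ax.set_yticks(range(len(params)))
ax.set_yticklabels(params.index)
ax.set_xlabel("Expected change in net benefit per unit increase")
ax.set_title("Predictors of net benefit (illustrative)")
fig.tight_layout()
plt.show()
```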
Taking the age variable as an example, respondents aged 65 and over are significantly more likely than those under 65 to believe concerns outweigh benefits for LLMs and driverless cars, indicating greater scepticism towards these emerging technologies. In contrast, male respondents are significantly more likely than women to believe benefits outweigh concerns for nearly all technologies.
Those holding a degree are significantly less likely than those without such qualifications to believe that benefits outweigh concerns for most AI technologies, except driverless cars and robotic care assistants. This may indicate that greater exposure to the risks associated with AI contributes to a more critical stance on its benefits. As mentioned previously, Black/Black British respondents are more likely than non-Black respondents to believe concerns outweigh benefits for most AI technologies, with the exceptions of LLMs and mental health chatbots; however, this difference was not statistically significant when controlling for other demographic variables. In contrast, being Asian/Asian British is significantly associated with believing benefits outweigh concerns for most AI applications, except facial recognition and cancer risk prediction.
When all other variables are held constant, those on low income still have significantly lower net benefit scores than those with higher incomes for most technologies. This suggests that experiencing low income may be linked with lower acceptance of AI technologies, possibly due to concerns around accessibility, fairness and potential biases in decision-making that could affect their lives, such as determining whether they are eligible for welfare benefits or loans. It presents a case for understanding in more detail the concerns those on low income have about these technologies, and whether and how these technologies can be designed to benefit them.
Awareness of a technology is strongly associated with believing its benefits outweigh its concerns across all AI applications, except facial recognition in policing and general-purpose LLMs. Perceptions of the pace and impact of technology on society show a consistent relationship across technologies: people who hold more positive views, believing that technology is changing society at a good pace and making society better, are more likely to see net benefits across all eight AI uses.
Figure 20 illustrates how patterns of perceived net benefit vary substantially across demographic groups and attitudinal indicators.
The four UK nations may have differing preferences around AI governance
We conducted exploratory analysis of differences in attitudes across the four UK nations: England, Northern Ireland, Scotland and Wales. Due to small sample sizes, we did not test whether differences across the four nations were statistically significant.
We found that Northern Ireland has a stronger preference for an independent oversight committee with citizen involvement than the other UK nations: 48% of respondents in Northern Ireland feel an oversight committee with citizen involvement should be responsible for ensuring AI is used safely, compared with 30% in England, 32% in Scotland and 32% in Wales. This preference for citizen involvement may be linked to greater familiarity with public participation initiatives (e.g. citizens’ assemblies[1]) or lower trust in other actors. In turn, respondents in Northern Ireland are less likely than those in the other nations to select an independent regulator for this role.
Scotland is more likely to want to place responsibility on international standards bodies than the other nations, with 42% selecting this option compared to 35% in England, 32% in Northern Ireland and 36% in Wales.
Wales places less responsibility on the companies developing AI technologies (50% choose this option), and more on independent scientists and researchers (37% choose this option), than the other UK nations.
Figure 21 shows a nation-level breakdown of expectations and preferences around the governance of AI.
Specific benefits and concerns for each technology: full list
Table 8: Specific benefits
| AI use | Benefit | Per cent (%) |
| --- | --- | --- |
| Predicting cancer risk from a scan | Enable earlier detection of cancer, allowing earlier monitoring or treatment | 85% |
| | Be more accurate than a doctor at predicting the risk of developing cancer | 46% |
| | Reduce discrimination in healthcare | 32% |
| | Reduce human error in predicting risk of developing cancer | 64% |
| | Make personal information more safe and secure | 9% |
| | Something else (please specify) | 2% |
| | None of these | 3% |
| | Don’t know | 6% |
| | Prefer not to answer | < 1% |
| Assessing loan repayment risk | Make applying for a loan faster and easier | 58% |
| | Be more accurate than banking professionals at predicting the risk of repaying a loan | 30% |
| | Be less likely than banking professionals to discriminate against some groups of people in society | 44% |
| | Save money usually spent on human resources | 36% |
| | Make personal information safe and secure | 9% |
| | Reduce human error in loan decisions | 41% |
| | Something else (please specify) | 2% |
| | None of these | 8% |
| | Don’t know | 11% |
| | Prefer not to answer | < 1% |
| Assessing welfare eligibility | Be faster than welfare officers at determining eligibility for benefits | 52% |
| | Be more accurate than welfare officers at determining eligibility for welfare benefits | 23% |
| | Be less likely than welfare officers to discriminate against some groups of people in society | 39% |
| | Save money usually spent on human resources | 43% |
| | Make personal information more safe and secure | 13% |
| | Reduce human error in determining eligibility for benefits | 39% |
| | Something else (please specify) | 3% |
| | None of these | 10% |
| | Don’t know | 12% |
| | Prefer not to answer | < 1% |
| Facial recognition for police surveillance | Make it faster and easier to identify wanted criminals and missing persons | 89% |
| | Be less likely than the police to discriminate against some groups of people in society when identifying criminal suspects | 66% |
| | Save money usually spent on human resources | 46% |
| | Make personal information more safe and secure | 51% |
| | Something else (please specify) | 23% |
| | None of these | 3% |
| | Don’t know | 2% |
| | Prefer not to answer | 3% |
| Driverless cars | Make travel by car easier | 32% |
| | Free up time to do other things while driving | 35% |
| | Drive with more accuracy than humans | 32% |
| | Be less likely to cause accidents than humans | 34% |
| | Make travel by car easier for some groups (e.g. disabled people or people who have difficulty driving) | 63% |
| | Save some money usually spent on human drivers | 25% |
| | Something else (please specify) | 3% |
| | None of these | 19% |
| | Don’t know | 6% |
| | Prefer not to answer | 0% |
| Robotic care assistants | Make caregiving tasks easier and faster | 48% |
| | Be more effective than caregiving professionals at tasks such as lifting patients out of bed | 37% |
| | Be less likely than caregiving professionals to discriminate against some groups of people in society | 37% |
| | Save money usually spent on human resources | 36% |
| | Something else (please specify) | 4% |
| | None of these | 14% |
| | Don’t know | 12% |
| | Prefer not to answer | < 1% |
| Large language models (LLMs) | Serve as a resource for continuous learning and skill development | 50% |
| | Improve efficiency by automating repetitive tasks (e.g. writing emails) | 56% |
| | Enhance creativity by generating ideas | 38% |
| | Save money usually spent on human resources | 31% |
| | Something else (please specify) | 3% |
| | None of these | 9% |
| | Don’t know | 17% |
| | Prefer not to answer | < 1% |
| Mental health chatbots | Serve as a faster way to get mental health support | 50% |
| | Be more accurate than a mental healthcare professional at suggesting treatment options | 7% |
| | Be less likely than mental healthcare professionals to discriminate against certain groups | 27% |
| | Save money usually spent on human resources | 28% |
| | Feel like interacting with a human, helping to prevent feelings of isolation | 33% |
| | Be useful for certain groups of people to use (e.g. those with mobility conditions) | 46% |
| | Something else (please specify) | 3% |
| | None of these | 15% |
| | Don’t know | 13% |
| | Prefer not to answer | < 1% |
Table 9: Specific concerns
| AI use | Concern | Per cent (%) |
| --- | --- | --- |
| Predicting cancer risk from a scan | Be unreliable and cause delays to predicting a risk of cancer | 26% |
| | Be less accurate than a doctor at predicting the risk of developing cancer | 23% |
| | Be less effective for some groups of people in society, leading to more discrimination in healthcare | 21% |
| | Make it difficult to understand how decisions about potential health outcomes are reached | 41% |
| | Make it difficult to know who is responsible if a mistake is made | 50% |
| | Gather personal information which could be shared with third parties | 37% |
| | Make personal information less safe and secure | 25% |
| | Cause doctors to rely too heavily on it rather than their professional judgements | 64% |
| | Something else (please specify) | 3% |
| | None of these | 7% |
| | Don’t know | 6% |
| | Prefer not to answer | < 1% |
| Assessing loan repayment risk | Be unreliable and cause delays to assessing loan applications | 23% |
| | Be less accurate than banking professionals at predicting the risk of repaying a loan | 25% |
| | Be more likely than banking professionals to discriminate against some groups of people in society | 16% |
| | Make it difficult to understand how decisions about loan applications are reached | 54% |
| | Make it difficult to know who is responsible if a mistake is made | 48% |
| | Gather personal information which could be shared with third parties | 47% |
| | Make personal information less safe and secure | 36% |
| | Lead to job cuts (for example, for trained banking professionals) | 42% |
| | Cause banking professionals to rely too heavily on the technology rather than their professional judgements | 57% |
| | Be less able than banking professionals to take account of individual circumstances | 59% |
| | Something else (please specify) | 3% |
| | None of these | 3% |
| | Don’t know | 7% |
| | Prefer not to answer | 0% |
| Assessing welfare eligibility | Cause delays to allocating welfare benefits | 17% |
| | Be less accurate than welfare officers at determining eligibility for welfare benefits | 35% |
| | Be more likely than welfare officers to discriminate against some groups of people in society | 14% |
| | Make it difficult to understand how decisions about allocating welfare benefits are reached | 54% |
| | Make it difficult to determine who is responsible if there is a mistake | 52% |
| | Gather personal information which could be shared with third parties | 47% |
| | Make personal information less safe and secure | 33% |
| | Lead to job cuts (for example, for trained welfare officers) | 45% |
| | Cause welfare officers to rely too heavily on it rather than their professional judgements | 60% |
| | Be less able than welfare officers to take account of individual circumstances | 60% |
| | Something else (please specify) | 4% |
| | None of these | 3% |
| | Don’t know | 8% |
| | Prefer not to answer | 0% |
| Facial recognition for police surveillance | Cause delays in identifying wanted criminals and missing persons | 7% |
| | Be less accurate than the police at identifying wanted criminals and missing persons | 13% |
| | Be more likely than the police to discriminate against some groups of people in society | 15% |
| | Lead to innocent people being wrongly accused if it makes a mistake | 54% |
| | Make it difficult to determine who is responsible if a mistake is made | 45% |
| | Gather personal information which could be shared with third parties | 56% |
| | Make personal information less safe and secure | 37% |
| | Lead to job cuts (for example, for trained police officers and staff) | 42% |
| | Cause the police to rely too heavily on it rather than their professional judgments | 57% |
| | Something else (please specify) | 4% |
| | None of these | 7% |
| | Don’t know | 4% |
| | Prefer not to answer | 0% |
| Driverless cars | Not always work, making the cars unreliable | 69% |
| | Make getting to places longer | 15% |
| | Not be as accurate or precise as humans | 43% |
| | Gather personal information which could be shared with third parties | 29% |
| | Be less effective for some groups of people in society than others | 26% |
| | Be difficult to use for some people | 45% |
| | Lead to job cuts (for example, for truck drivers, taxi drivers and delivery drivers) | 54% |
| | Make it difficult to know who is responsible if a mistake is made | 66% |
| | Make it more difficult to understand how the car makes decisions compared to a human driver | 57% |
| | Be more likely to cause accidents than human drivers | 42% |
| | Something else (please specify) | 5% |
| | None of these | 3% |
| | Don’t know | 3% |
| | Prefer not to answer | < 1% |
| Robotic care assistants | Be unreliable and cause delays to urgent caregiving tasks | 40% |
| | Be less effective than caregiving professionals at tasks such as lifting patients out of bed | 42% |
| | Be less effective for some groups of people in society than others, leading to more discrimination | 26% |
| | Be unsafe as it could hurt people | 59% |
| | Make it difficult to know who is responsible for what went wrong if a mistake is made | 50% |
| | Gather personal information which could be shared with third parties | 28% |
| | Lead to job cuts (for example, for trained caregiving professionals) | 53% |
| | Cause patients to miss out on human interaction from human carers | 82% |
| | Something else (please specify) | 3% |
| | None of these | 2% |
| | Don’t know | 6% |
| | Prefer not to answer | < 1% |
| Large language models (LLMs) | Reduce users’ own problem-solving skills or critical thinking abilities | 66% |
| | Harm the environment due to high energy consumption | 26% |
| | Be biased because of the data it is trained on | 50% |
| | Be used to generate offensive or harmful content | 47% |
| | Make it difficult to know who is responsible if a mistake is made | 46% |
| | Infringe on copyright because of the data it is trained on | 45% |
| | Lead to personal data being less secure and safe | 40% |
| | Lead to job cuts (for example, due to automating tasks) | 42% |
| | Something else (please specify) | 5% |
| | None of these | 3% |
| | Don’t know | 12% |
| | Prefer not to answer | < 1% |
| Mental health chatbots | Be unreliable and cause delays to getting help | 37% |
| | Be less accurate at suggesting treatment options | 49% |
| | Provide misleading advice, potentially leading to harmful consequences | 62% |
| | Lead to discrimination against certain groups | 12% |
| | Make it difficult to understand how decisions are reached | 44% |
| | Make it difficult to know who is responsible if a mistake is made | 46% |
| | Lead to sensitive personal data being less secure and safe | 39% |
| | Lead to job cuts (for example, for trained mental healthcare professionals) | 41% |
| | Lead to isolation by replacing human to human interactions | 68% |
| | Make it unclear that people are not interacting with a human | 63% |
| | Be relied on too heavily by those using it | 57% |
| | Something else (please specify) | 4% |
| | None of these | 2% |
| | Don’t know | 8% |
| | Prefer not to answer | < 1% |
Sample demographics
Table 10: Unweighted sample demographics
| Demographic information | Category | Unweighted sample size |
| --- | --- | --- |
| Age | 18–24 yrs | 73 |
| | 25–34 yrs | 421 |
| | 35–44 yrs | 596 |
| | 45–54 yrs | 600 |
| | 55–64 yrs | 635 |
| | 65–74 yrs | 645 |
| | 75+ yrs | 509 |
| | NA | 34 |
| Digital skills | Has digital skills[2] | 2549 |
| | No digital skills | 962 |
| | NA | 2 |
| Education | Degree level qualification(s) | 1635 |
| | No qualifications | 401 |
| | Non-degree level qualifications | 1450 |
| | Other | 14 |
| | NA | 13 |
| Ethnicity | Asian or Asian British | 433 |
| | Black or Black British | 198 |
| | Mixed or multiple | 49 |
| | Other | 40 |
| | White British | 2515 |
| | White other | 221 |
| | NA | 57 |
| Sex | Female | 1875 |
| | Male | 1632 |
| | NA | 6 |
| Digital access | Mobile and data | 2998 |
| | Mobile, no data | 225 |
| | No mobile | 284 |
| | NA | 6 |
| Household income | Above £1,500 (equivalised) per month | 1965 |
| | £1,500 or less (equivalised) per month | 1319 |
| | NA | 229 |
References
1. ‘Home’ (Citizens’ Assembly) <https://citizensassembly.ie/> accessed 13 March 2025.
2. As per the measure specified in: Lloyds Bank, ‘UK Consumer Digital Index’ (2018) <https://www.lloydsbank.com/assets/media/pdfs/banking_with_us/whats-happening/LB-Consumer-Digital-Index-2018-Report.pdf> accessed 13 March 2025.