Governance and regulation

These findings are from the latest survey wave (2024/25). Findings from the previous wave (2022/23) are published separately.


This section presents findings relevant to AI regulation and governance. We begin by examining the mechanisms that would make people more comfortable with the use of AI, and then explore three broad categories of public concern about AI: decision making, safety, and data sharing and representation.

72%

The majority of the public (72%) indicated that laws and regulations would increase their comfort with AI technologies – an increase from 62% in 2022/23

Laws and regulation increase most people’s comfort with the use of AI

We asked respondents what, if anything, would make them more comfortable with the use of AI. Participants could select multiple options. Figure 10 presents a comparison between the 2022/23 and 2024/25 survey results, illustrating changes in public attitudes towards mechanisms that enhance comfort with AI technologies. 

The majority of the public (72%) indicated that laws and regulations would increase their comfort with AI technologies – an increase from 62% in 2022/23. The second most commonly selected mechanism was the ability to appeal AI-generated decisions (65%), highlighting a strong public desire for avenues of redress in AI-driven decision making.

Figure 10: Mechanisms for increasing comfort with AI

‘Which of the following would make you more comfortable with AI being used?’

Mechanism                                                   2022/23   2024/25
Laws and regulations                                        62%       72%
Procedures for appealing decisions                          59%       65%
Information on how AI systems made a decision about you     –         61%
Security of personal information                            56%       61%
Explanations of how AI systems make decisions in general    54%       58%
Monitoring to check for discrimination                      53%       55%
More human involvement                                      44%       55%
Government regulator approval                               38%       56%
Nothing                                                     3%        5%
Don’t know/prefer not to say                                3%        4%
Something else                                              1%        2%
None of these                                               1%        1%
63%

Almost two-thirds of the UK public (63%) are not comfortable with AI systems making decisions that affect their lives

A majority of the public are uncomfortable with AI-based decision-making, preferring explainability over accuracy

As highlighted previously, there is some latent discomfort with AI-generated decisions in the absence of an appeal mechanism. To explore this further, we examined people’s comfort with AI-generated decisions that affect their lives, and the role that explainability of those decisions might play in allaying their discomfort.

Almost two-thirds of the UK public (63%) are not comfortable with AI systems making decisions that affect their lives. In particular, those with fewer digital skills and those on lower incomes are slightly more likely than the nationally representative sample to report discomfort with automated decision-making systems, at 69% and 68% respectively; this difference is statistically significant. Figure 11 shows overall how comfortable people are with AI technologies being used to make decisions that affect their lives.
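For readers who want to sanity-check a subgroup difference of this kind, a two-proportion z-test is the standard tool. The sketch below is illustrative only: the survey’s own analysis uses weighted data, and the group sizes in the code are assumptions, not figures from this report.

```python
# Minimal unweighted two-proportion z-test, sketched for the comparison of
# 69% (fewer digital skills) vs 63% (nationally representative sample).
# The sample sizes below are hypothetical; the published analysis is weighted.
from math import sqrt

def two_proportion_z(p1: float, n1: int, p2: float, n2: int) -> float:
    """z statistic for the null hypothesis of equal population proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)              # pooled proportion
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))  # pooled standard error
    return (p1 - p2) / se

# Strictly, the subgroup should be compared with the rest of the sample rather
# than with the full sample that contains it; this is a simplification.
z = two_proportion_z(0.69, 500, 0.63, 3500)
print(f"z = {z:.2f}")  # |z| > 1.96 indicates significance at the 5% level
```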

Figure 11: Comfort with AI decision-making

‘Overall, how comfortable, or not, are you with AI technologies being used to make a decision that affects your life?’

Very comfortable              3%
Somewhat comfortable          29%
Don’t know/prefer not to say  4%
Not very comfortable          41%
Not at all comfortable        22%

This discomfort sits alongside a preference for explanations to accompany decisions that affect people. To understand how the public trade off explanations for AI decisions against the accuracy of those decisions, we informed participants that: ‘Many AI systems are used with the aim of making decisions faster and more accurately than is possible for a human. However, it may not always be possible to explain to a person how an AI system made a decision.’ They were then asked to consider a range of statements about automated decision making and the trade-offs between accuracy and explanations accompanying those decisions.

The public have a strong preference for explanations over accuracy. 62% of people think an explanation should always accompany a decision; this includes the 37% who feel humans, not computers, should make these decisions and be able to explain them. Only 8% of people think accuracy is more important than providing an explanation when it comes to automated decision making by an AI system. These findings are consistent with the previous survey wave, suggesting little change in the last two years in preferences for explanations over greater accuracy in automated decision-making systems. Figure 12 shows the distribution of responses when considering trade-offs between accuracy and explanations.

People’s preferences for explainability over accuracy differ across age groups. Older people choose explainability and human involvement over accuracy to a greater extent than younger people. For those aged 18–34, ‘sometimes an explanation should be given even if it reduces accuracy’ was the most popular response (Figure 12). In contrast, for those aged 55 and above, the most popular response was ‘humans should always make the decisions and be able to explain them’. This difference is also consistent with findings from our previous survey wave.

Figure 12: Trade-offs in accuracy and explainability

‘Overall, which statement do you feel best reflects your personal opinion?’

Response options, in column order: (A) Accuracy is more important than providing an explanation; (B) Sometimes an explanation should be given, even if that makes the AI decision less accurate; (C) Don’t know/prefer not to say; (D) An explanation should always be given, even if that makes all AI decisions less accurate; (E) Humans, not computers, should always make the decisions and be able to explain them to the people affected.

            A     B     C     D     E
Total       8%    23%   8%    25%   37%
Age 18–24   11%   34%   5%    18%   32%
Age 25–34   9%    29%   9%    25%   28%
Age 35–44   10%   26%   6%    25%   33%
Age 45–54   9%    22%   9%    27%   34%
Age 55–64   8%    16%   7%    24%   45%
Age 65–74   6%    17%   7%    27%   43%
Age 75+     8%    17%   9%    22%   44%
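As a quick transcription check, the headline 62% quoted above can be recomputed from the ‘Total’ row of Figure 12. A minimal sketch, with the category labels paraphrased:

```python
# Figure 12, 'Total' row (labels paraphrased from the survey response options).
figure_12_total = {
    "accuracy matters more than explanation": 0.08,
    "sometimes explain, even if less accurate": 0.23,
    "don't know / prefer not to say": 0.08,
    "always explain, even if less accurate": 0.25,
    "humans should decide and explain": 0.37,
}

# The 62% combines the two options under which a decision always comes
# with an explanation (25% + 37%).
always_explained = (
    figure_12_total["always explain, even if less accurate"]
    + figure_12_total["humans should decide and explain"]
)
print(f"explanation should always accompany a decision: {always_explained:.0%}")  # 62%
```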

The public have had high exposure to AI-generated harms and strongly support shared public-private (rather than private-only) responsibility for AI safety

We asked the public about their experiences of the following types of online harms that may have been AI-generated: financial fraud or scams, deepfake images or video clips, false information, and content that promotes violence, abuse or hate. Figure 13 shows self-reported personal exposure to these harms. 

Overall, close to two-thirds of the UK public (67%) have encountered at least one of these harms a few times or more, while over a third (39%) have encountered at least one many times. Exposure was highest for false information, which 61% of people have experienced, followed by financial fraud (58%), deepfakes (58%) and content promoting violence, abuse or hate (39%). However, many individuals are unsure whether the harms they encountered online were AI-generated, with at least 20% reporting this for each of the harms we surveyed.

67%

Close to two-thirds of the UK public have encountered at least one form of AI-generated harm a few times or more

Exposure to these harms is associated with age. Individuals aged 18–24 were more likely than other age groups to report having experienced these harms, with 81% reporting exposure to false information and 85% having encountered deepfakes. Older age groups showed different patterns: those aged 65–74 encountered financial fraud (57%) and false information (53%) more commonly than other AI-generated harms.

Men were more likely than women to report encountering online harms that were potentially AI-generated, and this difference is statistically significant across harms. This aligns with research on public exposure to AI-generated harms such as deepfakes.[1] However, it is important to note that lower self-reported exposure among women could be due to adaptive behaviours, such as limiting their online engagement with others or limiting what they themselves share online, for example photos or opinions.[2] Such proactive behaviours, which may be driven by heightened fears of becoming a target of online harms,[2] might successfully reduce exposure to certain types of online harm, including AI-generated harms. However, as described in the limitations section, we do not have sufficient data to carry out detailed analysis.

94%

An overwhelming majority (94%) of the UK public said that they were either very or somewhat concerned about the spread of AI-generated harms online

Figure 13: Experience of harms online

‘To what extent have you encountered the following types of harms online that might have been generated by AI?’

Harm                                            Many times   A few times   Unsure if AI-generated   Never
Financial frauds or scams                       23%          35%           25%                      17%
False or misleading information                 25%          36%           25%                      14%
Deepfake image and/or audiovisual clips         19%          39%           20%                      23%
Content that promotes violence, abuse or hate   11%          28%           26%                      35%
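The per-harm exposure figures quoted earlier (61%, 58%, 58% and 39%) follow from this table by summing the ‘many times’ and ‘a few times’ columns. A small sketch, with values transcribed from Figure 13:

```python
# Figure 13 exposure data: (many times, a few times) per harm type.
figure_13 = {
    "Financial frauds or scams": (0.23, 0.35),
    "False or misleading information": (0.25, 0.36),
    "Deepfake image and/or audiovisual clips": (0.19, 0.39),
    "Content that promotes violence, abuse or hate": (0.11, 0.28),
}

# 'Ever exposed' here means encountered at least a few times.
for harm, (many, few) in figure_13.items():
    print(f"{harm}: {many + few:.0%}")
# Prints 58%, 61%, 58% and 39% respectively.
```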

We also asked the public about the extent to which they were concerned about the spread of harmful AI-generated content online. Figure 14 shows self-reported concern about AI-generated harms. An overwhelming majority (94%) of the UK public said that they were either very or somewhat concerned about the spread of such harms online. Given these safety risks and concerns, it is important to understand public expectations around the regulation and governance of AI. We asked respondents who they think should be responsible for the safe use of AI and what specific powers they should have.

Figure 14: Self-reported concern about AI-generated harms

‘To what extent do you feel concerned, or not, about the spread of harmful AI-generated content online?’

Very concerned         58%
Somewhat concerned     36%
Not very concerned     5%
Not at all concerned   1%

We asked the public about their expectations around the involvement of different stakeholders in AI safety. When asked who they think should be most responsible for ensuring AI is used safely, the majority of the UK public think an independent regulator (58%) and the companies developing AI technologies (58%) should be most responsible. Figure 15 shows expectations of responsibility.

The preference for an independent regulator to be most responsible for ensuring AI is used safely increases with age, while preference for the companies developing AI decreases with age

Figure 15: Expectations of responsibility

‘Who do you think should be most responsible for ensuring AI is used safely? Choose up to three options.’

An independent regulator                                                       58%
The companies developing the AI technology                                     58%
International standards bodies                                                 36%
An independent oversight committee with citizen involvement                    31%
Independent scientists and researchers                                         29%
The organisation/institution using the AI (e.g. companies, public services)   25%
Central government ministers                                                   21%
Don’t know/prefer not to say                                                   4%
Other                                                                          1%
No one should be responsible                                                   <1%

However, within this overall view, the preference for an independent regulator to be most responsible for ensuring AI is used safely increases with age, while preference for the companies developing AI decreases with age. Those aged 18–44 prefer companies over regulators, while those aged 45 and above prefer regulators over companies. This pattern of preference was similar to that found in our previous survey, suggesting age may continue to relate to expectations of, and potentially trust in, different organisations and institutions involved in AI development and deployment. Figure 16 shows responses across age groups to this question.

Figure 16: Expectations of responsibility by age group

‘Who do you think should be most responsible for ensuring AI is used safely? Choose up to three options.’

Stakeholder                                                                    18–24   25–34   35–44   45–54   55–64   65–74   75+
The companies developing the AI technology                                     79%     67%     66%     59%     50%     44%     47%
An independent regulator                                                       58%     53%     53%     62%     62%     62%     61%
International standards bodies                                                 30%     31%     33%     37%     40%     41%     38%
An independent oversight committee with citizen involvement                    25%     27%     25%     27%     32%     42%     40%
Independent scientists and researchers                                         23%     24%     29%     28%     27%     31%     39%
The organisation/institution using the AI (e.g. companies, public services)   22%     26%     27%     25%     24%     23%     23%
Central government ministers                                                   22%     22%     24%     20%     20%     21%     17%
88%

88% of people believe it is important that the government or regulators have the power to stop the use of an AI product if it is deemed to pose a risk of serious harm to the public

We also noted some regional variation in stakeholder preferences. For example, Northern Ireland shows a stronger preference for an independent oversight committee with citizen involvement (48%) compared to the other nations. Further details can be found in the Appendix section ‘The four UK nations may have differing preferences around AI governance’. However, due to small sample sizes by nation, we cannot investigate regional differences in more depth in this report.

We asked the public how important it was to them that the government or independent regulators have a series of specific powers around the use of AI. These were the power to:

  • stop the use of an AI product if it causes harm
  • actively monitor the risks posed by AI systems
  • develop safety standards on AI use
  • access information about the safety of AI systems from developers.

These powers were chosen because they relate to live issues around AI regulation, which may be pertinent to the UK government’s potential development of a future AI bill. Currently, there are no legal requirements for AI developers or independent regulators to regularly test or monitor upstream AI foundation models for safety risks, nor statutory powers that allow regulators to restrict the sale of AI products or services outside narrow sectoral regulation such as that for medical devices.

The public feel strongly that the government or independent regulators, rather than private companies alone, should have a suite of powers related to the use of AI. 88% of people believe it is important that the government or regulators – and not just private companies – have the power to stop the use of an AI product if it is deemed to pose a risk of serious harm to the public. Figure 17 shows responses across the powers. The low prevalence of ‘don’t know/prefer not to say’ responses suggests that these preferences are widely held.

The public is concerned about data privacy, data sharing, and lack of representation in decisions about AI

When looking at the range of specific concerns people chose in relation to each AI technology, we found that more people report feeling concerned about the safety and security of their personal information across most uses of AI this year than in our previous survey wave. For example, this year 56% of people are concerned about facial recognition technologies in policing gathering and sharing their personal information with third parties, up from 38% in 2022/23. Similarly, 33% of people are concerned about the safety and security of their personal information in relation to AI technologies assessing welfare eligibility, compared with 19% in our previous survey. Table 7 shows the prevalence of personal information concerns for uses of AI in 2022/23 and 2024/25.[3]

Table 7: Concerns around personal information

Technology                        Concern                                      2022/23   2024/25
Assessing loan repayment risk     Gather and share personal information        37%       47%
Assessing loan repayment risk     Personal information less safe and secure    21%       36%
Assessing risk of cancer          Gather and share personal information        24%       37%
Assessing risk of cancer          Personal information less safe and secure    13%       25%
Assessing welfare eligibility     Gather and share personal information        32%       47%
Assessing welfare eligibility     Personal information less safe and secure    19%       33%
Driverless cars                   Gather and share personal information        22%       29%
Facial recognition for policing   Gather and share personal information        38%       56%
Facial recognition for policing   Personal information less safe and secure    21%       37%
Robotic care assistants           Gather and share personal information        20%       28%
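The wave-on-wave movements discussed above are simple percentage-point differences between the two columns of Table 7. A brief sketch for the ‘gather and share personal information’ rows, with values transcribed from the table:

```python
# Table 7, 'gather and share personal information' concern: (2022/23, 2024/25).
wave_pairs = {
    "Assessing loan repayment risk": (37, 47),
    "Assessing risk of cancer": (24, 37),
    "Assessing welfare eligibility": (32, 47),
    "Driverless cars": (22, 29),
    "Facial recognition for policing": (38, 56),
    "Robotic care assistants": (20, 28),
}

for use, (old, new) in wave_pairs.items():
    print(f"{use}: {old}% -> {new}% (+{new - old} points)")
```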

We asked the public about their views on data sharing in the public sector. Most (83%) are concerned by the idea of public-sector bodies sharing data about them with private companies to train AI systems. This concern appears to be strongly held: only 3% of the population felt they did not know how concerned they were, if at all, as shown in Figure 18. These concerns are important in the UK context, where there have been explorations of sharing anonymised NHS data with private companies.[4]

Figure 18: Attitudes to public-sector data sharing

‘How concerned do you feel, or not, about public-sector bodies sharing data about you with private companies?’

Very concerned                43%
Somewhat concerned            40%
Don’t know/prefer not to say  3%
Not very concerned            11%
Not at all concerned          3%

We also asked the public about the extent to which they felt their views and values are represented in current decisions being made about AI and how it affects their lives. Half of the UK public (50%) said that they do not feel represented in this decision making, while just over a quarter (27%) said they do. Not feeling represented increases with age, with 57% of people aged 65–74 feeling unrepresented, compared to 40% of people aged 18–24.

Figure 19 shows the nationally representative distribution of responses to this question alongside breakdowns by age. 

50%

Half of the UK public (50%) said that they do not feel represented in decisions being made about AI and how it affects their lives

Figure 19: Public voice in AI

‘I feel like my views and values are represented in current decisions being made about AI and how it will affect my life.’

            Agree/strongly agree   Disagree/strongly disagree   Don’t know/prefer not to say
Total       27%                    50%                          24%
Age 18–24   36%                    40%                          24%
Age 25–34   33%                    43%                          24%
Age 35–44   31%                    48%                          21%
Age 45–54   24%                    53%                          23%
Age 55–64   24%                    52%                          24%
Age 65–74   22%                    57%                          21%
Age 75+     23%                    45%                          32%


References

  1. Tvesha Sippy and others, ‘Behind the Deepfake: 8% Create; 90% Concerned. Surveying Public Exposure to and Perceptions of Deepfakes in the UK’ (arXiv, 7 July 2024) <http://arxiv.org/abs/2407.0552…> accessed 23 September 2024.
  2. Francesca Stevens and others, ‘Women Are Less Comfortable Expressing Opinions Online than Men and Report Heightened Fears for Safety: Surveying Gender Differences in Experiences of Online Harms’ (arXiv, 27 March 2024) <http://arxiv.org/abs/2403.1903…> accessed 13 March 2025.
  3. Concern around the safety and security of personal information was not asked for robotic care assistants and driverless cars and is therefore omitted from the data.
  4. Kiran Stacey and Dan Milmo, ‘Ministers Mull Allowing Private Firms to Make Profit from NHS Data in AI Push’ The Guardian (13 January 2025) <https://www.theguardian.com/so…> accessed 13 March 2025.