Understanding, trust, and thoughts about regulation

Here, we asked people how important it is to be able to explain how AI works, and how important regulation is.

Many AI systems are used to make decisions faster and more accurately than a human would be able to. However, this means it is not always possible to explain how a decision was made.

In terms of regulation, we asked people to tell us what type of regulation would make them more comfortable with AI technologies.

Image: a houseplant against a neutral background, refracted in different ways by a fragmented glass grid. Image credit: Alan Warburton / © BBC / Better Images of AI / Plant / CC-BY 4.0

Explainability

To understand how explainable the British public think a decision made by an AI system should be when explainability trades off against accuracy, we first informed participants that: ‘Many AI systems are used with the aim of making decisions faster and more accurately than is possible for a human. However, it may not always be possible to explain to a person how an AI system made a decision.’ We then asked people which of the following statements best reflects their personal opinion:

  • Making the most accurate AI decision is more important than providing an explanation.

  • In some circumstances an explanation should be given, even if that makes the AI decision less accurate.

  • An explanation should always be given, even if that makes all AI decisions less accurate.

  • Humans, not computers, should always make the decisions and be able to explain them to the people affected.

Almost one third (31%) of respondents indicate that humans should always make the decisions – and be able to explain them

When there are trade-offs between the explainability and accuracy of AI technologies, the British public value explainability over accuracy: it is important to people to understand how decisions driven by AI are made.

Figure 6 shows that only 10% of the public feel that ‘making the most accurate AI decision is more important than providing an explanation’, whereas a majority choose options that reflect a need for explaining decisions.

Specifically, almost one third (31%) indicate that humans should always make the decisions (and be able to explain them), followed by 26% who think that ‘sometimes an explanation should be given, even if it reduces accuracy’ and a further 22% who choose ‘an explanation should always be given, even if it reduces accuracy’.

People’s preferences for explainable AI decisions dovetail with the importance of transparency and accountability demonstrated by people’s specific concerns about each technology. For all technologies (except driverless cars and virtual health assistants), the proportion of concerns mentioning ‘it is unclear how decisions are made’ is higher than the proportion mentioning ‘inaccuracy’.

Figure 6: The extent to which AI decisions should be explainable

‘Which statement do you feel best reflects your personal opinion?’

  • Accuracy more important than explanation: 10%
  • Sometimes explanations should be given, even if it reduces accuracy: 26%
  • Explanations should always be given, even if that makes all AI decisions less accurate: 22%
  • Humans should always make the decisions and be able to explain them: 31%
  • Don’t know/prefer not to say: 11%

People’s preferences for explainability over accuracy change across age groups.

Older people choose explainability and human involvement over accuracy to a greater extent than younger people. For those aged 18–44, ‘sometimes an explanation should be given, even if it reduces accuracy’ is the most popular response (Figure 7). Respondents aged 18–24 are the least likely of any age group to say that ‘humans should always make the decisions and be able to explain them’ (21%), whereas this becomes the most popular response from age 45 upwards and is highest among respondents aged 65–74 (45%).

Figure 7: The extent to which AI decisions should be explainable split by age

‘Which statement do you feel best reflects your personal opinion?’

Age group | Accuracy more important than explanation | Sometimes explanations should be given, even if it reduces accuracy | Explanations should always be given, even if that makes all AI decisions less accurate | Humans should always make the decisions and be able to explain them | Don’t know/prefer not to say
18–24 yrs | 8% | 33% | 26% | 21% | 12%
25–34 yrs | 12% | 34% | 20% | 22% | 12%
35–44 yrs | 10% | 29% | 23% | 25% | 13%
45–54 yrs | 12% | 25% | 22% | 31% | 10%
55–64 yrs | 9% | 23% | 24% | 37% | 7%
65–74 yrs | 9% | 20% | 20% | 45% | 6%
75+ yrs | 11% | 18% | 20% | 41% | 10%
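As an illustrative aside (this is a minimal sketch, not part of the survey’s own analysis, and the response labels are shortened paraphrases of the survey wording), the Figure 7 percentages can be tabulated to pick out the most popular response in each age group:

```python
# Tabulate the Figure 7 percentages and find the most popular response per age group.
# Labels are shortened paraphrases of the survey options, not the exact wording.
import pandas as pd

responses = [
    "Accuracy more important",
    "Sometimes explanations, even if less accurate",
    "Explanations always, even if less accurate",
    "Humans should always decide and explain",
    "Don't know / prefer not to say",
]

figure_7 = pd.DataFrame(
    {
        "18-24": [8, 33, 26, 21, 12],
        "25-34": [12, 34, 20, 22, 12],
        "35-44": [10, 29, 23, 25, 13],
        "45-54": [12, 25, 22, 31, 10],
        "55-64": [9, 23, 24, 37, 7],
        "65-74": [9, 20, 20, 45, 6],
        "75+":   [11, 18, 20, 41, 10],
    },
    index=responses,
)

# Most popular response in each age group (column-wise maximum).
print(figure_7.idxmax(axis=0))
```

Running this returns the ‘sometimes explanations’ option for the 18–44 age groups and ‘humans should always decide and explain’ for every group aged 45 and over, matching the pattern described above.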

Governance and regulation

To find out about people’s views on the regulation of AI, we asked people to indicate what (if anything) would make them more comfortable with AI technologies being used. Participants could select as many options as they felt applied from a list of seven.

Public attitudes suggest a need for regulation that involves redress and the ability to contest AI-powered decisions.

People most commonly indicated that ‘laws and regulations that prohibit certain uses of technologies and guide the use of all AI technologies’ would increase their comfort with the use of AI, with 62% in favour. People are also largely supportive of ‘clear procedures for appealing to a human against an AI decision’ (selected by 59%). Adding to the concerns expressed about data security and accountability, 56% of the public want to ‘make sure that personal information is kept safe and secure’ and 54% want ‘clear explanations of how AI works’.

Figure 8 shows the proportion of people selecting each option when asked what, if anything, would make them more comfortable with AI technologies being used. 

Figure 8: Increasing people’s comfort with the use of AI

‘Which of the following, if any, would make you more comfortable with AI technologies being used?’

  • Laws and regulations: 62%
  • Procedures for appealing decisions: 59%
  • Security of personal information: 56%
  • Explanations on how AI decisions are made: 54%
  • Monitoring to check for discrimination: 53%
  • More human involvement: 44%
  • Government regulator approval: 38%
  • Don’t know/prefer not to say: 3%
  • Nothing: 3%
  • Something else: 1%
  • None of these: 1%

We also asked participants who they think should be most responsible for ensuring AI is used safely from a list of seven potential actors. People could select up to two options.

The British public want regulation of AI technologies. ‘An independent regulator’ is the most popular choice for governance of AI.

Figure 9 shows that 41% of people feel that ‘an independent regulator’ should be responsible for the governance of AI, the most popular choice of the seven presented. Patterns of preferred governance do not change notably depending on whether or not people feel well informed about new technologies.

These results add to a PublicFirst poll conducted in March 2023 with 2,000 UK adult respondents, which found that 62% of respondents supported the creation of a new government regulatory agency, similar to the Medicines and Healthcare products Regulatory Agency (MHRA), to regulate the use of new AI models.

Figure 9: Views on who should be responsible for ensuring AI is used safely

‘Who do you think should be most responsible for ensuring AI is used safely? (choose up to two options)’

  • An independent regulator: 41%
  • Companies developing the AI technology: 26%
  • Independent oversight committee: 24%
  • International standards bodies: 23%
  • Scientists and researchers: 21%
  • The Government: 20%
  • The people using the AI (e.g. companies): 16%
  • Don’t know/prefer not to say: 3%
  • No one should be responsible: <1%
  • Other (please specify): <1%
  • All of the above: <1%

People’s preferences for the governance of AI change across age groups.

While people overall most commonly select ‘an independent regulator’, Figure 10 shows that 43% of 18–24-year-olds think that ‘the companies developing the technology’ should be most responsible for ensuring AI is used safely. In contrast, only 17% of people over 55 select this option.

This could reflect young people’s more in-depth experience with different technologies and their associated risks, and therefore a demand for developers to take more responsibility, especially since young people also report the highest exposure to technology-driven problems such as ‘online harms’. That 18–24-year-olds most commonly say the companies developing the technologies should be responsible for ensuring AI is used safely raises questions about private companies’ corporate responsibility alongside regulation.

Figure 10: Views on who should be responsible for ensuring AI is used safely, split by age group

‘Who do you think should be most responsible for ensuring AI is used safely? (choose up to two options)’ NB: Options with less than 5% are not included.

Age group | An independent regulator | Companies developing the AI technology | Independent oversight committee | International standards bodies | Scientists and researchers | The Government | The people using the AI (e.g. companies)
18–24 yrs | 25% | 43% | 17% | 18% | 18% | 28% | 17%
25–34 yrs | 34% | 36% | 18% | 22% | 22% | 20% | 18%
35–44 yrs | 36% | 29% | 21% | 20% | 21% | 24% | 17%
45–54 yrs | 46% | 24% | 27% | 22% | 18% | 19% | 18%
55–64 yrs | 49% | 18% | 29% | 25% | 18% | 17% | 15%
65–74 yrs | 52% | 15% | 34% | 24% | 24% | 15% | 13%
75+ yrs | 38% | 19% | 23% | 28% | 29% | 20% | 11%

To understand people’s concerns about who develops AI technologies, we asked people how concerned, if at all, they feel about different actors producing AI technologies. 

We asked this in the context of hospitals asking an outside organisation to produce AI technologies that predict the risk of developing cancer from a scan, and the Department for Work and Pensions (DWP) asking an outside organisation to produce AI technologies for assessing eligibility for welfare benefits. 

We asked people how concerned they are about each of the following groups producing AI in each context: 

  • private companies

  • not-for-profit organisations (e.g. charities)

  • another governmental body or department

  • universities/​academic researchers.

For both the use of AI in predicting cancer from a scan, and assessing eligibility for welfare benefits, the British public are most concerned by private companies developing the technologies and least concerned by universities and academic researchers developing the technologies

For the development of AI that may be used to assist the Department for Work and Pensions in assessing eligibility for welfare benefits, the public are most concerned about private companies developing the technology, with 66% being somewhat or very concerned. Just over half (51%) are somewhat or very concerned about another governmental body or department developing the technology, and 46% are somewhat or very concerned about not-for-profit organisations developing it.

People are generally least concerned about universities or academic researchers developing this technology, with 43% being somewhat or very concerned. While this is the lowest level of concern across the groups asked about, it is still a sizeable proportion of people expressing concern, which suggests that even more trusted stakeholders need to be transparent about their role and approach to developing these technologies.

Regarding the development of AI that may help healthcare professionals predict the risk of cancer from a scan, there is a very similar pattern of concern over who develops the technology. People are most concerned about private companies developing the technology, with 61% being somewhat or very concerned, followed by a governmental body (44%). People are less concerned about not-for-profit organisations and universities or academic researchers developing the technology. Overall, levels of concern about developers are lower for technologies that predict the risk of cancer than for technologies that help assess eligibility for welfare.

Figure 11 shows the extent to which people feel concerned by the following actors developing new technologies to assess eligibility for welfare benefits and predict the risk of developing cancer: private companies, governmental bodies, not-for-profit organisations and universities/​academic researchers.

Figure 11a: Concern around who produces AI technologies to assess welfare eligibility

‘How concerned do you feel, if at all, about each of these groups producing new computer technologies for assessing eligibility for welfare?’

Group | Very | Somewhat | Not very | Not at all | Don’t know/prefer not to say
Private companies | 31% | 35% | 20% | 6% | 8%
A governmental body | 14% | 37% | 31% | 10% | 8%
Not-for-profit organisations | 14% | 32% | 36% | 9% | 9%
Universities/academic researchers | 12% | 31% | 36% | 12% | 9%

Figure 11b: Concern around who produces AI technologies to predict risk of cancer

‘How concerned do you feel, if at all, about each of these groups producing new computer technologies for assessing risk of cancer?’

Group | Very | Somewhat | Not very | Not at all | Don’t know/prefer not to say
Private companies | 25% | 36% | 25% | 8% | 6%
A governmental body | 12% | 32% | 35% | 13% | 8%
Not-for-profit organisations | 9% | 26% | 42% | 16% | 7%
Universities/academic researchers | 6% | 20% | 44% | 24% | 6%
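For readers cross-checking the headline figures, the ‘somewhat or very concerned’ percentages quoted above are simply the sum of the ‘Very’ and ‘Somewhat’ columns in Figures 11a and 11b. The minimal sketch below (illustrative only, not the report’s own analysis code) makes that arithmetic explicit:

```python
# Reproduce the 'somewhat or very concerned' totals quoted in the text
# by summing the 'Very' and 'Somewhat' percentages from Figures 11a and 11b.
welfare = {  # Figure 11a: assessing eligibility for welfare
    "Private companies":                 {"very": 31, "somewhat": 35},
    "A governmental body":               {"very": 14, "somewhat": 37},
    "Not-for-profit organisations":      {"very": 14, "somewhat": 32},
    "Universities/academic researchers": {"very": 12, "somewhat": 31},
}
cancer = {   # Figure 11b: predicting risk of cancer from a scan
    "Private companies":                 {"very": 25, "somewhat": 36},
    "A governmental body":               {"very": 12, "somewhat": 32},
    "Not-for-profit organisations":      {"very": 9,  "somewhat": 26},
    "Universities/academic researchers": {"very": 6,  "somewhat": 20},
}

for label, data in (("Welfare eligibility", welfare), ("Cancer risk", cancer)):
    print(label)
    for group, pct in data.items():
        total = pct["very"] + pct["somewhat"]
        print(f"  {group}: {total}% somewhat or very concerned")
```

This yields 66%, 51%, 46% and 43% for the welfare context and 61% and 44% for private companies and a governmental body in the cancer context, matching the figures cited in the preceding paragraphs.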

While we asked about concerns over the development of a specific technology rather than overall trust, our findings resonate with results from the second wave of a CDEI survey on public attitudes towards AI, which found that on average, respondents most trusted the NHS and academic researchers to use data safely, while trust in government, big tech companies and social media companies was lower.