Key findings

We asked people about their awareness of, experience with and attitudes towards different uses of AI. This included asking people what they believe are the key advantages and disadvantages, and how they would like to see these technologies regulated and governed.


We asked people about 17 different uses of AI.

Detailed definitions for each technology can be found in the Definitions of AI uses section.

For the majority of AI uses that we asked about, people have broadly positive views, but they express concerns about some uses.

Many people think that several uses of AI are generally beneficial, particularly for technologies related to health, science and security. For 11 of the 17 AI uses we asked about, most people say they are somewhat or very beneficial. The use of AI for detecting the risk of cancer is seen as beneficial by nine in 10 people.

The public also express concern over some uses of AI. For six of the 17 uses, over 50% find them somewhat or very concerning. People are most concerned about advanced robotics such as driverless cars (72%) and autonomous weapons (71%).

72%

Nearly three-quarters of people find driverless cars to be a concerning use of AI

Digging deeper into people’s perceptions of AI shows that the British public hold highly nuanced views on the specific advantages and disadvantages associated with different uses of AI 

For example, while nine out of 10 British adults find the use of AI for cancer detection to be broadly beneficial, over half of British adults (56%) are concerned about relying too heavily on this technology rather than on professional judgements, and 47% are concerned about the difficulty in knowing who is responsible for mistakes when using this technology.

People most commonly think that speed, efficiency and improving accessibility are the main advantages of AI across a range of uses. For example, 70% feel speeding up processing at border control is a benefit of facial recognition technology.


However, people also raise concerns about the potential for AI to replace professional judgements, its inability to account for individual circumstances, and a lack of transparency and accountability in decision-making. For example, almost two-thirds (64%) are concerned that workplaces will rely too heavily on AI for recruitment rather than on professional judgements.

Additionally, for technologies like smart speakers and targeted social media advertisements, people are concerned about personal data being shared. Over half (57%) are concerned that smart speakers will gather personal information that could be shared with third parties, while 68% are concerned about this for targeted social media adverts.

The public wants regulation of AI technologies, though this differs by age

The majority of people in Britain support regulation of AI. When asked what would make them more comfortable with AI, 62% said they would like to see laws and regulations guiding the use of AI technologies. In line with our findings showing concerns around accountability, 59% said that they would like clear procedures in place for appealing to a human against an AI decision.

When asked who should be responsible for ensuring that AI is used safely, people most commonly choose an independent regulator, with 41% in favour. Views on this differ somewhat by age: 18- to 24-year-olds are most likely to say companies developing AI should be responsible for ensuring it is used safely (43% in favour), while only 17% of people aged over 55 support this.

43%

43% of 18- to 24-year-olds say companies developing AI should be responsible for ensuring it is used safely, while only 17% of people aged over 55 support this.

People say it is important for them to understand how AI decisions are made, even if making a system explainable reduces its accuracy: a more complex system may be more accurate, but also more difficult to explain. When considering whether explainability is more or less important than accuracy, the most common response is that humans, not computers, should make ultimate decisions and be able to explain them (selected by 31%).

This sentiment is expressed most strongly by people aged 45 and over. Younger adults (18–44) are more likely to say that an explanation should only be given in some circumstances, even if that reduces accuracy.

Taken together, this research makes an important contribution to what we know about public attitudes to AI and provides a detailed picture of the ways in which the British public perceive issues surrounding the many diverse applications of AI.

We hope that the research will be useful in helping researchers, developers and policymakers understand and respond to public expectations about the benefits and risks that these technologies may pose, as well as public demand for how these technologies should be governed.