Risk and eligibility assessments
We asked about the following uses of AI to assess eligibility and risk: calculating eligibility for jobs, assessing eligibility for welfare benefits, predicting the risk of developing cancer from a scan, and predicting the risk of not repaying a loan.
The public’s most commonly chosen benefit for risk and eligibility assessments is speed (for example, ‘applying for a loan will be faster and easier’)
Around half of the public think speed is a benefit of using AI to assess eligibility for welfare benefits (43%), for job recruitment (49%) and for assessing the risk of not repaying a loan (52%). An overwhelming majority, 82%, think that earlier detection of cancer is a key advantage of using AI to predict the risk of cancer from a scan, a level of consensus not reached for any other technology.
In addition to speed, reductions in human bias and error are seen as key benefits of the technologies in this group.
For the use of AI in job recruitment and in assessing the risk of not repaying a loan, the technology being less likely than humans to ‘discriminate against some groups of people in society’ is the second most commonly selected benefit, chosen by 41% and 39% respectively.
Reduction in ‘human error’ is the second most commonly selected benefit for the use of AI in determining risk of cancer from scans and for assessing eligibility for welfare benefits, selected by 53% and 38% respectively.
Being more accurate than human professionals overall, however, is not selected as a key benefit for most uses of AI in this group. Fewer than one in three people in Britain perceive this to be a key benefit of using AI to assess the risk of not repaying a loan (29%), eligibility for welfare benefits (22%) or eligibility for jobs (13%).
An exception to this pattern is the use of AI to determine risk of cancer from scans, where 42% of people perceive improved accuracy over professionals as a key benefit.
Table 1: Top three most commonly chosen benefits of using AI in risk and eligibility technologies
‘Which of the following, if any, are ways you think the use of this technology will be beneficial?’
| Technology | Top three chosen benefits | Percentage |
|---|---|---|
| Assessing risk of cancer | 1 Earlier detection | 82% |
| | 2 Less human error | 53% |
| | 3 More accurate | 42% |
| Assessing loan repayment risk | 1 Faster and easier | 52% |
| | 2 Less likely to discriminate | 39% |
| | 3 Less human error | 37% |
| Assessing job eligibility | 1 Faster and easier | 49% |
| | 2 Less likely to discriminate | 41% |
| | 3 Save money | 32% |
| Assessing welfare eligibility | 1 Faster and easier | 43% |
| | 2 Less human error | 38% |
| | 3 Less likely to discriminate | 37% |
The most common concerns the British public have about using AI for these eligibility and risk assessments include the technology being less able than a human to account for individual circumstances, over-reliance on the technology rather than professional judgement, and a lack of transparency about how decisions are made
These concerns are particularly high in relation to the use of AI in job recruitment, with 64% saying they think that professionals will ‘rely too heavily on the technology rather than their professional judgements’; 61% saying that the technology will be ‘less able than employers and recruiters to take account of individual circumstances’; and 52% saying that ‘it will be more difficult to understand how decisions about job application assessments are reached’.
These concerns add to findings from the CDEI’s latest research into public expectations around AI governance, in which people felt it was important, in the case of job recruitment, to have a clear understanding of the criteria AI uses to make decisions and to have the ability to challenge such decisions.
The British public express repeated concerns about a lack of human oversight in AI technologies, even for the use of AI to determine cancer risk from a scan, which, as seen in the previous section, is perceived to be one of the most beneficial technologies in the survey.
Yet over half of British adults (56%) still express concern about relying too heavily on this technology rather than professional judgements, while 47% are concerned that, if the technology made a mistake, it would be difficult to know who is responsible. These attitudes suggest that the public see value in human oversight of AI for cancer risk detection, even when this use of AI is perceived as largely positive.
Table 2: Most commonly selected concerns for risk and eligibility technologies
‘Which of the following, if any, are concerns that you have about the use of this technology?’
| Technology | Top three chosen concerns | Percentage |
|---|---|---|
| Assessing risk of cancer | 1 Reliance on technology | 56% |
| | 2 Accountability of mistakes | 47% |
| | 3 How decisions are reached | 32% |
| Assessing loan repayment risk | 1 Overlooking individuality | 52% |
| | 2 Reliance on technology | 51% |
| | 3 How decisions are reached | 49% |
| Assessing job eligibility | 1 Reliance on technology | 64% |
| | 2 Overlooking individuality | 61% |
| | 3 How decisions are reached | 52% |
| Assessing welfare eligibility | 1 Overlooking individuality | 55% |
| | 2 Reliance on technology | 47% |
| | 3 Accountability of mistakes | 47% |