Government agencies can use an AI-driven process called ‘risk-based verification’ to assess whether welfare benefits applications are likely to be genuine. Government departments expect that investing in advanced analytics, including AI, will help them make savings of around £1.6 billion by 2030–31.
These AI tools use data about previous genuine and fraudulent claims as ‘training data’, and look for characteristics or features that indicate whether the current application is genuine or not.
When assessing a new welfare application, the AI tool compares it against data about previous fraudulent applications and allocates it to a low-, medium- or high-risk category. If the new application shares many characteristics with previously identified fraudulent applications, it will be placed in a high-risk category; applications with fewer of these characteristics will be categorised as lower risk.
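The categorisation logic described above can be sketched in simplified form. This is an illustrative sketch only: the feature names, weights and thresholds below are invented for the example and do not reflect any real government system, which would typically use a statistical model trained on historical claims data.

```python
# Illustrative sketch of risk-based categorisation.
# Feature names, weights and thresholds are invented for this example.

# Weights (hypothetically) learned from previous genuine vs fraudulent
# claims: a higher weight means the feature was more strongly associated
# with past fraudulent applications.
FRAUD_FEATURE_WEIGHTS = {
    "mismatched_address_history": 0.4,
    "duplicate_bank_details": 0.35,
    "inconsistent_income_declaration": 0.25,
}

def risk_category(application: dict) -> str:
    """Score an application by how many fraud-associated features it
    shares with past fraudulent claims, then map that score to a
    low/medium/high risk tier."""
    score = sum(
        weight
        for feature, weight in FRAUD_FEATURE_WEIGHTS.items()
        if application.get(feature, False)
    )
    if score >= 0.6:
        return "high"
    if score >= 0.3:
        return "medium"
    return "low"

# An application sharing several fraud-associated features is placed
# in the high-risk category:
print(risk_category({"mismatched_address_history": True,
                     "duplicate_bank_details": True}))  # high
```

In a real system the weights would come from a trained model rather than a hand-written table, but the principle is the same: the more closely an application resembles past fraudulent claims, the higher its assigned risk category.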
Benefits of this technology include the potential to simplify and automate decision-making, and to make decision-making processes more efficient.
However, this kind of AI tool may reproduce existing discrimination in the welfare benefits application system. For example, a National Audit Office report warned about the risk that when looking for fraud, algorithms can be ‘biased towards selecting claims for review from certain vulnerable people or groups with protected characteristics’.