How is AI being regulated across the globe?
Individual countries and jurisdictions across the globe are starting to regulate AI. There is currently no single, joined-up approach to AI regulation, even though uses of AI often cross national borders.
In the UK, the Government is proposing a ‘contextual, sector-based regulatory framework’ built on its existing network of regulators and laws, set out in the white paper Establishing a pro-innovation approach to AI regulation. This means that regulation of AI will take account of the context and the existing laws that apply in, for example, finance or welfare at work.
The forthcoming Data Protection and Digital Information Bill is also likely to have a significant impact on the governance of AI in the UK.
Outside the UK, many other countries are developing laws and regulations to control the use and development of AI.
The European Union has proposed a new piece of AI legislation, the AI Act, which aims to ensure that AI is safe and beneficial and is likely to become law in 2024. The Act takes a risk-based approach, placing AI applications into three categories:
Unacceptable risk: AI applications that could cause harm or encourage destructive behaviour. These applications are banned outright.
High risk: AI applications in sensitive sectors such as healthcare or transportation. These must adhere to strict requirements on transparency, oversight and accountability.
Low to minimal risk: all other AI applications not explicitly considered high risk. The rules here are less strict, but applications must still ensure basic safety and user protection.
Canada, through its proposed Artificial Intelligence and Data Act, takes a similar approach to the European Union. However, Canada will not ban any AI applications; instead, the proposed law requires AI developers to prepare plans that minimise risk and improve transparency, ensuring that AI applications respect anti-discrimination laws and are clear about their decision-making processes.
The USA has yet to establish nationwide AI regulation. However, the government has issued an AI Bill of Rights: a set of non-binding guidelines to promote safe and ethical AI use, including stronger data privacy and protections against unfair decisions by AI systems. At the same time, individual states and cities are developing their own AI regulatory measures.
China has enacted many AI-relevant regulations since 2021, including a law on personal data protection and an ethical code for AI. Chinese laws grant users transparency rights, including the right to know when they are interacting with AI-generated content and the option to switch off AI recommendation services. Measures against ‘deepfakes’ (realistic but fake AI-generated content) are also in place. However, many of these laws apply only to private companies’ use of AI, not to the use of AI by the Chinese state.
Brazil’s Senate has put forward a draft AI regulation that has clear parallels with the approach of the EU AI Act. Other major economies, such as Japan, India and Australia, have issued guidelines on AI but have not yet passed any AI-specific legislation.