How is AI being regulated across the globe?

AI is starting to be regulated by individual countries and jurisdictions across the globe. There is currently no single, joined-up approach to AI regulation, although uses of AI often cross national borders.

The International Guiding Principles on Artificial Intelligence and the voluntary Code of Conduct for AI developers, agreed by G7 countries under the Hiroshima AI Process, were the first international commitments on AI. At the first AI Safety Summit in 2023, governments signed the Bletchley Declaration and set an agenda to address frontier AI risks. Following the summit, the UK and the USA were the first countries to establish AI safety institutes; the UK’s institute has since been renamed the AI Security Institute.

In 2022, the UK government proposed a ‘contextual, sector-based regulatory framework’ based on its existing network of regulators and laws. This means that regulation of AI will take account of the context and existing laws that relate to sectors in which AI may be used – for example, finance, or welfare at work.

Since then, the only UK legislative development likely to affect how AI is produced and used is the Data (Use and Access) Bill, introduced in 2024. The bill is not specific to AI, but it could make it easier for organisations to process personal data, including for automated decision-making or for research into AI training.

In 2024, the European Union adopted the AI Act to ensure that AI is safe and beneficial to society. This law takes a risk-based approach, dividing AI applications into three categories:

  • Unacceptable risk: These are AI applications that could cause harm or encourage destructive behaviour. These applications are banned outright.
  • High-risk: These are AI applications in sensitive sectors like healthcare, transportation or immigration. They must adhere to strict requirements on transparency, oversight and accountability. For example, responsible organisations must assess the risk of these applications discriminating against people and take steps to reduce it.
  • Low to minimal risk: For other AI applications, the rules are less strict, but providers must still ensure user protection and basic safety.

In Canada, debate around the Artificial Intelligence and Data Act (AIDA), proposed in 2022, is still ongoing. AIDA takes a risk-based approach but introduces obligations only for ‘high-impact’ AI systems.

Canada would not ban any AI applications; instead, the proposed law requires AI developers to minimise risk, improve transparency, ensure that AI applications respect anti-discrimination laws, and be clear about their decision-making processes.

The USA has yet to establish nationwide AI regulation. However, laws in other areas (consumer protection, anti-discrimination, sectoral privacy law), executive orders, guidance and voluntary frameworks already affect the use of AI in the country.

In 2025, the new administration’s executive order on Removing Barriers to American Leadership in Artificial Intelligence revoked many previous AI policies, including the executive order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which had established a government-wide effort to guide the development of responsible artificial intelligence. All policies that government agencies adopted under that order before the end of 2024 are currently under review, which means that AI guidelines and voluntary standards are being rolled back.

Since 2021, China has enacted many regulations that are relevant to the use of AI, even if they are not specific to it. These include a law for personal data protection, a data security law, and an ethical code for AI.

Chinese laws grant users transparency rights – allowing people to know when they are interacting with AI-generated content – and the option to switch off AI recommendation services. National regulations also mandate measures against ‘deepfakes’, prohibit price discrimination, and ban the use of algorithms to influence opinion or to target children with addictive products.

However, many of these laws only apply to public-facing AI developed by private companies and not to the use of AI by the Chinese state. 

In June 2023, the highest legislative body in China, the National People’s Congress, included an ‘Artificial Intelligence Law’ in its legislative plan. This means that AI regulation could be expected in 2025.

In 2024, China’s National Technical Committee 260 on Cybersecurity, under the Standardisation Administration of China, published a draft of the document ‘Cybersecurity technology – Basic security requirements for generative artificial intelligence service’ to provide guidance on security measures for generative AI.

Brazil’s Senate has approved a draft AI regulation with clear parallels to the approach of the EU AI Act. The bill, expected to pass in 2025, sets out a comprehensive framework for AI development and use, and takes a risk-management approach that respects fundamental rights.

Other major economies like Japan, India and Australia have issued guidelines on AI but have not yet passed any AI-specific legislation.