How can AI technologies be regulated?
AI technologies pose particular challenges for regulators because they often cut across traditional regulatory boundaries: national borders, industry sectors, and the divide between public and commercial use.
Debates about AI regulation often centre on two general approaches: independent regulation and ‘self-regulation’.
Under independent regulation, the rules governing the use of AI technologies are established in law, as legislation. The regulator is responsible for enforcing that law: its role is to ensure that those who develop and deploy AI technologies follow the rules.
An independent regulator can help ensure that the law is interpreted consistently and enforced coherently. As public institutions, regulators can also issue guidance that clarifies what companies and organisations must do to comply with the law.
Self-regulation means that a group of private AI companies agree on a set of rules and industry standards. In other words, AI developers and users design the rules, adopt them, monitor compliance and ensure that the rules remain effective.
Self-regulation can be beneficial in some cases. In times of crisis, such as natural disasters or health emergencies, companies can coordinate to set up rules quickly. However, without independent checks, reports and audits, it may be difficult to assess objectively whether companies are actually following the rules or working for the benefit of people and society.