AI Governance at the National Level
How are nations beginning to respond to the rise of AI, what regulatory tools are available, and how can MPs best contribute?
“AI’s unpredictable development, the rate of change and its ever-increasing power mean its arrival could present the most substantial policy challenge ever faced, for which the state’s existing approaches and channels are poorly configured.” – Tony Blair Institute
The rise of AI has stirred global debate about how to realise its benefits and control risks. Governments around the world are moving at pace to understand the technology, debate the pros and cons of regulation and develop their responses. As parliamentarians will have an essential role to play in crafting, debating and passing legislation on AI, this post looks at emerging governance approaches at the national level and different regulatory tools that can be used.
Examples will be drawn from ‘early movers’ - the US, the EU and China. Their international weight means their choices will inevitably shape the global economy and the way in which AI is managed safely. However, MPs worldwide will have a key role to play, as a mature system of AI governance will require coordination at the international level.
What is AI governance seeking to balance?
Promoting innovation and competition, allowing industries the space to grow while preventing stifling measures.
Protecting freedoms and fundamental rights, including safeguarding consumers and addressing liability for harms.
Ensuring transparency, including on how AI is trained, identifying when AI is used, and preventing unexplainable decisions and inequitable outcomes.
What are some challenges to conventional governance approaches?
Firstly, the speed and unpredictability of change pose problems for traditional lawmaking processes. For instance, few people predicted that generative AI would begin to automate creative industries so soon. New capabilities of AI systems can also arise unpredictably during and after deployment.
Secondly, AI’s complexity implies an asymmetry in knowledge and resources between democratic institutions on one side and AI developers and technology companies on the other, creating risks of regulatory blind spots or regulatory capture.
Thirdly, because AI is a transformative technology, effective governance will need to go beyond legal and technical expertise to take in ethics and sociology. Achieving consensus on the right approach across these fields is no small feat.
Fourthly, it is not clear exactly where regulation should focus. For example, where AI causes harm to the public, where in the supply chain does accountability fall? Even where governments are clear about an area they want to regulate, acting can be easier said than done: bias, for example, is often embedded in training data and very difficult to attribute.
With these challenges in mind, well-informed and up-to-date lawmakers, and strong connections between democratic institutions and science, research, civil society and the AI industry, will be key.
What approaches are we beginning to see?
Soft law measures include frameworks and guidelines, voluntary codes of conduct, industry standards and best practices. While not legally enforceable, they can help to establish norms and standards for the responsible use of AI, including within specific sectors, and can help fill gaps and develop understanding of the technology before hard law is drafted. This is the status quo option, in which technology companies carry on their work and are trusted with AI safety.
Criticisms of soft law include the obvious lack of legal bite, with voluntary guidelines no substitute for legally binding national and international regulation. Self-regulation is also likely to be inadequate where competitive pressures disincentivise companies from acting responsibly.
Where is this happening?
In the US, the White House issued a Blueprint for an AI Bill of Rights, a set of principles on AI governance covering data transparency, privacy, independent auditing, risk assessment and monitoring.
In late July, the White House secured voluntary commitments from leading AI companies on security testing, sharing information on AI risks and vulnerabilities, implementing measures to identify AI-generated content and prioritising research on societal risks.
In the Senate, Majority Leader Chuck Schumer proposed the SAFE Innovation framework, which sets out principles around AI safety, accountability and disclosure that can provide a basis for future regulation.
There have also been industry efforts:
The Google Secure AI Framework (SAIF) identifies security standards for the development and deployment of AI. Microsoft has prioritised thought leadership on AI security, disseminating advisories and guidelines.
Ten companies including OpenAI and TikTok signed up to guidelines on how to build, create, and share AI-generated content responsibly, including disclosing when the public encounters AI content.
In March 2023, an open letter called for a pause on large-scale AI development to ensure that ethical standards and safety were prioritised before new systems were made publicly available.
Hard law covers centralised, binding regulations and laws that govern the development, deployment and use of AI, often grounded in ethical principles. This raises the question of how to balance relying on, and updating, existing laws against the need for entirely new legislation.
Many risks around AI will be covered under existing legal frameworks, for instance intellectual property theft and discrimination. Moreover, as AI is increasingly used across different fields such as healthcare, finance and entertainment, customised regulation targeting how AI is used in each sector may be seen as more effective than one-size-fits-all regulation.
For MPs, examining areas of the legal code that may be violated by AI will be an important element of national AI governance. However, legislation often relies on human-centric concepts such as “intent”, “malice” or “recklessness”, which may not map cleanly onto potential AI harms.
Where is this happening?
The UK is adopting a sectoral approach to AI regulation, avoiding excessive new rules and focusing instead on leveraging existing regulatory frameworks, structures and expertise in areas such as human rights, health and safety, and competition.
In the US, federal agencies have begun to extend existing rules to AI. The Federal Trade Commission has opened investigations into AI companies over potential breaches of consumer protection law, and courts are examining issues such as liability for recommendation algorithms.
The alternative view is that the complexity and novelty of AI as a transformative technology, and the unprecedented ethical concerns it raises, require comprehensive AI legislation and new regulatory structures.
Where is this happening?
The EU’s AI Act - passed by the European Parliament and currently in trilogue negotiations with the Council of the European Union and the European Commission - is the world’s first piece of comprehensive AI regulation. It aims to balance economic demands with ethical considerations and emphasises the rights of AI users and citizens. The legislation identifies risks associated with the technology, banning ‘unacceptable risk’ applications that exploit human vulnerabilities, manipulate behaviour or threaten privacy, such as social scoring, mass surveillance and predictive policing. ‘High risk’ applications in sectors such as social welfare, criminal justice and employment must demonstrate safety, efficacy, privacy compliance, transparency and non-discrimination. For such uses, the Act sets up regulatory regimes for registration, documentation and logging of AI usage, and rigorous testing for accuracy, security and fairness. Penalties for non-compliance are based on companies’ global turnover.
Outside of the EU, South Korea’s ‘Act on Promotion of AI Industry and Framework for Establishing Trustworthy AI’ consolidates seven earlier fragmentary pieces of AI legislation into an overall legal framework. The Act clarifies that anyone can develop new AI technology without government pre-approval. It defines “high-risk AI” as AI that impacts human life and safety, and requires trustworthiness conditions to be actively applied in these areas.
Where specific AI legislation is being put in place, there are different regulatory tools and legal measures available.
Licensing for AI would put it in line with other industries carrying potential societal harms, such as nuclear power and pharmaceuticals. Options include licensing AI developers themselves, and specific licences to train and deploy AI systems above certain thresholds (a simple sketch below illustrates how such a threshold might be computed). Licensing regimes can require companies to publish risk assessments, identify the capabilities of their systems and prove they are safe, secure and ethical. Downstream, licensing can cover AI product registration and approval.
A licensing regime is a good fit for ensuring transparency of AI development, slowing unchecked deployment and requiring diligence from developers. Licensing should also impose a monitoring regime with third-party assessment and auditing, including re-certification of AI systems that are substantially modified.
Strict licensing regimes have been criticised for risking regulatory capture and damaging competition. Licensing also needs to be paired with strong liability frameworks to avoid becoming a “superficial checkbox exercise”, and it does not prevent the development and deployment of AI by bad actors who already have access to AI models.
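To make the idea of a training-compute threshold concrete, the sketch below (in Python) estimates the total compute used in a training run and checks it against a hypothetical licensing trigger. The threshold figure, hardware numbers and function names are all assumptions for illustration; no jurisdiction had fixed such a figure at the time of writing.

```python
# Illustrative sketch: checking a training run against a hypothetical
# compute-based licensing threshold. All figures are assumptions.

LICENSING_THRESHOLD_FLOP = 1e25  # hypothetical trigger for a training licence


def training_compute_flop(num_chips: int, flop_per_second_per_chip: float,
                          utilisation: float, days: float) -> float:
    """Estimate total floating-point operations used by a training run."""
    seconds = days * 24 * 60 * 60
    return num_chips * flop_per_second_per_chip * utilisation * seconds


# Hypothetical run: 1,000 accelerators at ~3e14 FLOP/s each,
# 40% utilisation, training for 90 days.
run_flop = training_compute_flop(
    num_chips=1_000,
    flop_per_second_per_chip=3e14,
    utilisation=0.4,
    days=90,
)

print(f"Estimated training compute: {run_flop:.2e} FLOP")
if run_flop >= LICENSING_THRESHOLD_FLOP:
    print("Run exceeds the hypothetical threshold: licence required.")
else:
    print("Run is below the hypothetical threshold: no licence required.")
```

Under these assumed numbers the run comes to roughly 9.3e23 FLOP, below the hypothetical 1e25 trigger; the point is simply that a threshold of this kind is mechanically checkable from declared hardware and training-time figures.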
Consumer protection and liability frameworks ensure accountability and provide the public with recourse in cases of harm caused by AI. A liability framework means that AI developers and companies are held appropriately responsible if one of their models were, for example, to enable a cybersecurity breach or provide advice on developing bioweapons. As part of a governance regime, such frameworks incentivise investment in safety and discourage reckless deployment of poorly understood systems.
Difficulties include identifying clearly what constitutes AI-caused harm and how to gather the proof needed to make a claim. It is also complex to assess liability across the different entities in the AI supply chain, such as developers, deployers and end users. Emerging capabilities of AI systems make future applications and challenges hard to predict, complicating the design of comprehensive liability frameworks.
Where is this happening?
The 2022 EU AI Liability Proposal aims to provide legal clarity to cases where damage is caused by AI, addressing issues such as the proof needed for a successful liability claim.
In China, recent legislation holds generative AI developers responsible for outputs containing prohibited or illegal content.
Post-release auditing and oversight by independent bodies will be a key consideration as AI systems grow more powerful and present novel, complex risks. Auditing provides the means for regulators to ensure compliance and helps incentivise AI developers to proactively implement safety measures. For this to be effective, there will need to be harmonised standards, and the body tasked with auditing will need the right level of expertise and up-to-date information on AI developments. These are complex requirements, and post-release auditing will likely require coordination with a global regime.
Disclosure requirements involve AI developers and companies being mandated to publish information, including potentially negative details, about AI products. This includes the data inputs used to train AI systems, helping to expose the assumptions and biases embedded in a system and to identify use of copyrighted material. Post-development, it can cover transparency of AI use and tracking of potential harms. Disclosure requirements can also cover informing the public when they are dealing with an AI system or interacting with AI-generated content, such as through watermarking.
Difficulties with disclosure requirements include identifying specific data points when foundation models are trained on vast amounts of unstructured data. Finding the right technical measure to disclose AI-generated outputs is also difficult, especially in the case of text.
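As a minimal sketch of what machine-readable disclosure might look like in practice, the Python snippet below uses the Pillow imaging library to embed an ‘AI-generated’ tag in a PNG file’s metadata. The tag names are invented for illustration; real provenance schemes rely on cryptographically signed manifests, and simple metadata like this is easily stripped, one reason robust disclosure, especially for plain text, remains technically hard.

```python
# Minimal illustration of machine-readable AI disclosure via image metadata.
# Tag names are hypothetical; production schemes use signed provenance data.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Stand-in for an AI-generated image.
image = Image.new("RGB", (64, 64), color="white")

# Attach disclosure tags as PNG text chunks.
metadata = PngInfo()
metadata.add_text("ai_generated", "true")           # hypothetical label
metadata.add_text("generator", "example-model-v1")  # hypothetical provenance

image.save("labelled.png", pnginfo=metadata)

# A downstream platform could read the label back before display.
reloaded = Image.open("labelled.png")
print(reloaded.text.get("ai_generated"))  # -> "true"

# Note: re-saving, screenshotting or copy-pasting strips this metadata,
# and plain text has no equivalent channel, hence the interest in
# statistical watermarks embedded in the content itself.
```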
Where is this happening?
Examples include California’s Bot Disclosure Law that requires chatbots to identify themselves, preventing deception.
In China, new legislation mandates labelling for AI-generated content. Other legislation targets recommendation algorithms, with developers required to submit the datasets used for training as well as safety reports.
Standard-setting bodies play an important role in ensuring consistency, quality, and safety in various industries. Supporting the above areas, they can help establish best practice by providing ethics guidelines, technical standards and safety protocols for AI developers. They can provide guidance and training, translating complicated regulations into practical measures and helping AI developers and companies comply. Collaborative efforts between lawmakers, AI experts and ethicists, industry, civil society and the public will be necessary to establish and evolve AI standards.
Where is this happening?
The International Organization for Standardization (ISO) has already developed standards covering how companies should approach risk management and impact assessments and manage the development of AI.
In summary, MPs addressing the emerging field of AI regulation can:
Use their oversight role, providing evidence of the emerging impact of AI and engaging with existing regulatory bodies. This can help identify where new legislation is required or existing laws need revising, and which regulatory tools will be needed.
Support a public conversation to help set the ethical principles underpinning AI regulation. MPs will be key in voicing public concerns and needs and incorporating these into AI legislation.
Support multi-stakeholder processes to develop standards around AI. MPs will be central in ensuring that actors across research, industry and civil society are involved in the discussion.
Contribute to the discussion on AI governance at the international level, bringing the voices of their constituents and different groups in society into the debate on global AI governance.