An important week for AI Governance and Safety
The impact of the US Executive Order on AI and the UK AI Safety Summit, and how parliaments can advance the agreements made and address the gaps.
There’s been a brief hiatus with this newsletter while I worked on a policy brief with the Westminster Foundation for Democracy to coincide with the UK AI Safety Summit. The summit was one of a number of important national and international initiatives on AI in the last week. This post looks at the progress made and the gaps that remain on the path to effective, people-centred agreements on AI governance and safety.
In the US, an Executive Order (EO) on “safe, secure and trustworthy” AI was announced on Monday 30 October. Stand-out sections included concrete measures on AI safety and security, with requirements that AI developers report to the federal government on training activities and the results of safety tests. To achieve this, the EO leverages the Defense Production Act – a Korean War-era law – underlining that the Biden administration sees AI as posing risks to national security and to public health and safety.
The EO addresses AI ‘deep fakes’, recommending measures such as digital signatures, watermarking, and other labelling techniques for AI-generated images, videos and audio. There is an emphasis on equity and civil rights, recognising risks of bias from AI systems deployed in areas including housing, credit, healthcare and the criminal justice system. It identifies AI as a “dual-use” technology, acknowledging that it can be used by actors seeking to undermine and subvert democratic processes. Overall, running to 111 pages, it represents a wide-ranging and comprehensive set of measures to address near- and longer-term risks from AI.
Gaps remain. The EO is strong on measures for government use of AI, but it still relies primarily on the cooperation of companies developing advanced AI, building on the voluntary commitments announced in July. As such, it doesn’t establish the kind of licensing or compliance requirements envisaged under the EU’s AI Act. This means we’re still some way from coordinated AI regulation across democratic countries.
As with other initiatives in the US, such as the AI Bill of Rights, the measures announced are mainly guidance, and without legislative action by Congress, enforcement will remain a gap. President Biden acknowledged that “we still need Congress to act”, and while the EO is laudable in calling for measures to protect privacy, for example, Bills to progress this have been stuck in a divided Congress.
The EO can be viewed as a mechanism to strengthen US leadership on AI, establishing the basis for the global engagement on AI standards envisaged under the order. It was announced two days before the UK AI Safety Summit. This was unlikely to be a coincidence, a point reinforced by a speech on the eve of the summit by US Vice-President Kamala Harris, who stressed the need to look at the “full spectrum” of AI risks, including near-term risks such as bias, discrimination and misinformation that specifically threaten democracy. This was seen as a counterpoint to the AI Safety Summit’s focus on longer-term risks from the most advanced ‘frontier’ AI systems. VP Harris also emphasised the importance of enacting laws that respond to the changes brought about by AI and the need to hold tech companies accountable.
On 1-2 November, the UK’s AI Safety Summit attracted plenty of attention as the first high-level global meeting on AI safety. It brought together over 25 countries representing different political systems, including, for instance, China and the UAE alongside the US and EU.
What was agreed? Attendees signed the ‘Bletchley Declaration’ communique, committing to address common concerns around AI, build a shared scientific understanding of AI risks and develop respective risk-based policies across countries. The communique stressed the need to address “the protection of human rights, transparency and explainability, fairness, accountability, regulation, safety, appropriate human oversight, ethics, bias mitigation, privacy and data protection”. It emphasised that AI risks were “international in nature, and so are best addressed through international cooperation” and highlighted the importance of supporting developing countries to leverage AI to “support sustainable growth and address the development gap”.
Concrete measures announced at the Summit included:
Agreements between governments and AI developers on safety testing of AI models before and after deployment against national security, safety and societal risks. Six leading companies released their safety policies before the summit, a step in the right direction for transparency.
The establishment of an AI Safety Institute to examine, evaluate, and test new types of AI. The institute will “explore all risks” from AI and facilitate information sharing with policymakers, companies, academia, civil society and the public. It will also set out processes by which experts and the public can report AI harms. At the same time, the US last week announced its own safety institute, working in similar areas with a focus on identifying measures such as watermarking AI-generated content to help mitigate disinformation risks.
A programme of future summits, starting in South Korea and France, and the publication of a ‘state of science’ report before the next summit to provide evidence on risks from frontier AI and help inform international and domestic policy making.
The safety summit is an important step towards international cooperation on AI safety, with wide buy-in from diverse countries. The measures announced take steps towards an international monitoring regime for AI development and will improve transparency about the impact of AI.
However, we are still some way from clear governance frameworks and accountability mechanisms. The communique indicated a continued reliance on voluntary agreements, noting that AI developers have a “strong responsibility for ensuring the safety of these AI systems” and ‘encouraging’ industry to be transparent. Future summits will need more detail on how countries will cooperate on AI governance and safety, for instance how to enforce a moratorium on dangerous uses of AI. At the moment, the agreements represent principles rather than a roadmap – and there is no shortage of international principles on AI.
This point was recognised in a call for an International AI Treaty in an open letter last week from leading AI experts, academics and civil society representatives. The proposals include setting global compute limits, establishing an international “safety laboratory” – a cooperative platform akin to a CERN for AI safety – and forming a commission to oversee treaty compliance.
Is this type of international consensus achievable? It is difficult to envisage in a climate of global competition around AI rules and standards and competing diplomatic initiatives. Earlier, in October, China introduced its own Global AI Governance Initiative on the tenth anniversary of the Belt and Road Initiative. While it reflected some international consensus around AI risks and misuse, it stressed “equal rights” to AI development across political systems (coming days after further restrictions on US chip exports to China), highlighted the need to protect against uses of AI for “intervening in other countries’ internal affairs” and promoted the role of developing countries in AI governance.
The recent moves towards agreement on AI governance and safety are important. How can parliaments help progress the agreements made and address the gaps?
Translate principles into enforceable laws with concrete accountability mechanisms. As the US case shows, without legislative action, measures to address AI safety will be incomplete. Parliaments have the unique authority to legislate enforceable AI controls and integrate international agreements into national law. A report from the OECD in late October identifies actions parliaments are already taking. For instance, in the parliament of Norway, the Liberal Party urged the government to assess how the Norwegian legal framework should be interpreted and applied to AI use, and proposed the establishment of an Algorithmic Supervision Authority.
Establish democratic oversight of AI safety. Oversight through parliamentary debates, questions and committee inquiries helps shine a light on the impact of AI, builds public trust in democratic institutions to address AI safety and provides evidence to inform national policy and legislation. VP Harris posed a series of interesting questions that can act as a frame for democratic oversight of AI development:
Whose biases are being written into the code? Whose interests are being served? Who reaps the reward of speedy adoption? Who suffers the harms most acutely? Who will be hurt if something goes wrong? Who has been at the table?
For democratic oversight to be effective, parliaments need the same high-quality, independent information and evidence-based understanding of AI risks that governments have. For instance, as an independent body, the UK AI Safety Institute should also have a mandate to report to parliament. One of the duties of the Safety Institute is to establish mechanisms for public reporting of AI vulnerabilities and harms, and parliaments should establish similar channels to gather ongoing evidence from the public, including those most at risk of marginalisation from AI deployment.
Parliaments also need access to AI expertise. In the US, for example, the Senate AI Insight Forum invites industry experts to engage with Senators on a variety of issues related to AI, including risks to elections and democracy. Parliaments worldwide would benefit greatly from international forums providing similar advice. The US EO also identified the importance of the government identifying and hiring AI talent, and parliaments will likewise need to consider how they can build specialist staff or access high-level AI expertise.
Engage and educate the public on AI. A recent Luminate/Survation poll in the UK found that over a quarter of the UK population (26%) were unaware of AI risks, pointing to the need for public education on AI – a factor neglected in the Bletchley Declaration. The AI Safety Summit mentioned safety testing against certain ‘societal risks’, and MPs are well-placed to lead a public conversation that helps define such risks and informs the public about measures to address them. Examples in the OECD report included Korea’s multi-stakeholder ‘AI Ethics Policy Forum’, which aims to build social consensus on how to establish trust in AI, and Chile’s ‘Participation Process on AI’, which gives the public a voice on AI policy.
A future with safe and trustworthy AI will need a well-informed and engaged public, concrete and binding laws, and participatory and inclusive oversight of AI development and deployment. Running alongside future AI Safety Summits should be dedicated international efforts to support the key role of parliaments in advancing this agenda.