Global Governance of AI
Why it’s needed. Initiatives underway and in the pipeline. How MPs can and should be central.
With progress on AI visible and increasing each month, the clamour for effective global governance grows louder. Why is it needed?
The potential international risks are stark. Without safeguards, emerging capabilities of AI in the hands of bad actors could enable cyberattacks, create bioweapons and automate disinformation campaigns across borders.
Many countries won’t have the resources, infrastructure or capacity to develop and take advantage of advanced AI. As a result, AI development may not address global needs and could exacerbate global inequality.
Advances in AI will disseminate globally, with states affected by capabilities developed in other countries. All nations will therefore need a voice in the debate over how to govern AI in line with ethical standards and human rights.
Clearly global attention is required. However, deciding on the right approaches will take insight, expertise and collaboration.
The current landscape of global AI governance is disparate, with an array of initiatives. Two of particular relevance for parliaments are:
The Organisation for Economic Co-operation and Development (OECD) AI Principles. Developed in 2019, the OECD principles are based around human-centred values, fairness and the rule of law. They promote AI that is transparent, explainable, robust, secure, and safe, with accountability mechanisms in place. AI should also promote inclusive economic growth, sustainable development, and well-being. The principles were crafted by a multi-stakeholder group of 50 experts from government, industry, civil society, trade unions, the technical community and academia.
Why is it important? This was the first initiative endorsed by national governments. It is actionable, with recommendations for policy makers focusing on an enabling policy environment, R&D investment and building national capacity on AI. This is supported by a useful policy observatory that tracks and shares action taken at national level.
How can parliaments engage? The OECD’s Global Parliamentary Network has a special group on AI to foster dialogue, promote understanding and share effective practices between MPs worldwide, including on AI legislation.
UNESCO’s Recommendation on the Ethics of Artificial Intelligence was adopted in November 2021. It emphasises four core values and ten principles for a human-rights-centred approach to AI. The recommendation identifies 11 policy areas to advance responsible AI development, and UNESCO provides tools and support to help countries implement it.
Why is it important? UNESCO has helped drive progress towards global normative standards for AI based on human rights and ethics. As a United Nations initiative, it provides unparalleled convening power and engages countries across the global south. China and Russia, largely excluded from Western AI ethics debates, have also signed up to the principles.
How can parliaments engage? The policy prescriptions stress the importance of national checks and balances. Governments are encouraged to report every four years on how they are implementing the recommendation. Parliaments can help ensure this process is democratic and inclusive, and the report can provide a key tool for parliamentary oversight of AI. UNESCO provides advice and capacity development for policymakers, and there are moves to expand this support to lawmakers worldwide.
Other initiatives at the global level include:
The International Telecommunication Union (ITU)’s AI for Good initiative, which focuses more on developmental benefits of AI, identifying and helping to scale applications that can contribute to the SDGs. As another UN initiative, its relevance comes in supporting global discussion around beneficial AI.
The Global Partnership on AI (GPAI) was initiated by Canada and France in 2020. Based on the OECD principles, GPAI aims to foster a collaborative effort across 29 countries on AI research and global policy development. It has strong state-level support, although mainly amongst Western democracies. However, critics have noted that debate has been stifled by national interference, leaving the partnership with an unclear purpose and mandate.
At a summit in May 2023, G7 leaders initiated the Hiroshima process to help advance and harmonise AI policy, with a focus on generative AI, working together with OECD and GPAI.
What initiatives are coming down the line?
The first draft of an international treaty on AI is being finalised by the Council of Europe. Signatories will need to take steps to ensure that the development and use of AI respects human rights, democracy and the rule of law, for instance banning certain uses of facial recognition. It will still take considerable time to ratify the treaty and implement it in national law. The process also raises questions on global reach and inclusivity, despite the engagement of countries outside Europe such as the US, Canada, Israel, Mexico and Japan in formulating the draft.
This November, the UK will hold a multi-stakeholder AI Safety Summit, aiming to foster international collaboration on AI safety and inform governance standards. It will bring together the EU, US, Japan and South Korea as jurisdictions with advanced AI industries, together with leading AI companies, civil society groups and experts. There is less clarity on the involvement of China, with the original intention being for only ‘like-minded’ countries to attend. How the summit meshes with other initiatives covering similar ground is also unclear at present.
Beyond this landscape, policymakers, technologists and AI governance experts are increasingly calling for new international agencies or institutions for AI governance.
The G7 has proposed an International Panel on Artificial Intelligence to share research and knowledge on AI, akin to the UN’s Intergovernmental Panel on Climate Change (IPCC).
UK Prime Minister Rishi Sunak is reported to be pushing for the UK to host a global watchdog on AI, modelled on the International Atomic Energy Agency (IAEA).
There are calls to establish a CERN for AI, an international, publicly-funded research centre to help harness advanced AI models for the greater good.
We can expect challenges ahead. Firstly, reaching agreement on specific issues to tackle, including risks, is not straightforward. AI safety and AI ethics specialists often approach problems and solutions from distinct viewpoints. Should global agreements focus on averting catastrophic risks, setting global standards around equitable and sustainable AI use, expanding access to AI, or a combination? Each likely requires specialist approaches.
Secondly, establishing international consensus will be very difficult in a divided world. Comprehensive global governance will inevitably need to include China as a key AI player. Without global buy in, rival initiatives risk politicisation and entrenching divisions. There are potential warning signs in the current debate around internet governance, with China and Russia pushing for governments to be able to set their own regulations and standards.
Thirdly, there is the speed and unpredictability of AI development. Nuclear governance is often held up as a gold standard, and there are certainly lessons to learn from the success of the IAEA. However, it was established 11 years after Hiroshima and Nagasaki, and challenges from advanced AI, such as its potential to aid the creation of bioweapons, mean we likely don’t have that kind of time.
If there is to be new global architecture, research points to four key functions:
Building consensus on AI opportunities and risks
Setting international norms and standards to manage global threats from AI, helping translate these into national regulation and monitoring compliance
Developing and distributing advanced AI to help spread benefits globally; and
Accelerating AI safety research
There is cause for optimism. Precedents for countries to come together in the face of global risks exist, including the Treaty on the Non-Proliferation of Nuclear Weapons, Montreal Protocol to protect the ozone layer, Biological Weapons Convention, and the Paris Agreement on climate change.
Whatever model develops, it will need guidance and buy-in from governments, ethicists, large technology companies, non-profits, academia and society at large. Democratic institutions and our elected representatives should be front and centre. They can contribute in various ways:
Advocating for governments to join global AI initiatives and participating in parliamentary groups around these initiatives.
Setting up or advocating for global and regional parliamentary bodies on AI governance. ParlAmericas provides a good example.
Building alliances with MPs across countries to share information and coordinate, pushing a stronger parliamentary voice in AI governance.
Helping monitor compliance with international agreements, e.g. the UNESCO recommendation, and scrutinising efforts to build domestic capacity to implement global agreements.
Using oversight and representation work to provide evidence to inform global agreements – assessing the impact AI is having across society and tabulating reports of harms or potential risks.
Bringing the voice of the public and marginalised groups in their countries into debates on responsible use of AI.
Using information gathered in international engagements to inform and educate the public on AI developments.