AI and Parliament Digest #1
Generative AI turbocharging disinformation and affecting elections; Kenyan Parliament addresses harmful emerging technology; UK committee report on national and global AI governance.
#1 | 5 October 2023
Welcome to the first AI and Parliament Digest. So far posts on this Substack have covered different issues at the intersection of AI and parliamentary governance. Regular digests will provide the latest news and reflections from research and reports.
The aim is to provide readers with updated evidence on AI’s impact, understanding of emerging approaches to AI governance, and information on how international parliaments are taking action.
Generative AI and the public information space. Read more here.
A recent Freedom House report highlights how generative AI is boosting the spread of disinformation and propaganda worldwide. It identifies at least 47 governments – in both democracies and autocracies – using generative AI to manipulate public opinion and censor critics. In Venezuela, for instance, state media outlets spread pro-government messages through AI-generated videos on non-existent international English-language channels.
Last week also saw reports of deepfaked audio released days before the election in Slovakia. Recordings of opposition leader Michal Šimečka were posted to Facebook, in which he appeared to discuss how to rig the election, including buying votes from the marginalised Roma community and raising the price of beer. AFP's fact-checkers said the audio showed signs of AI manipulation. As a Polish citizen, I'm increasingly concerned about AI-generated material impacting this month's election.
AI’s impact: Generative AI’s ability to produce increasingly realistic text, audio and images is lowering barriers to entry for influence campaigns. It has driven down costs and allowed for more precise, subtle and personalised forms of disinformation. As AI-generated content on the internet and social media becomes normalised, it will likely undermine trust in reliable news and fuel a ‘liar’s dividend’. Countries without robust and free media may struggle most to push back.
What can MPs do? There is no quick fix. However, MPs can work across democracies to advocate for human rights-based standards for the development and use of generative AI by state and non-state actors. At home, they can use their public leadership role to shine a light on the risks of manipulated information. They can help boost societal resilience by funding and advocating for fact checking, media literacy, public awareness and education campaigns. More here.
Also in the news:
News publishers have raised concerns about generative AI producing fabricated articles, warning that this may “pollute human knowledge” and “pose significant threats to the information ecosystem”.
For a bit of light relief, you can test yourself on identifying AI-generated content (I won’t tell you my score 😉).
AI and the Global South. Read more here.
I was in Kenya until a few weeks ago and came across the case of Worldcoin, a cryptocurrency scheme criticised for its data collection practices and accused of violating Kenyan law. This week a parliamentary committee reported on the case, recommending that Worldcoin be shut down. The report called on the government to put in place comprehensive policies and oversight frameworks for virtual assets and their service providers, and to review the legislation governing the collection of biometric data from Kenyans.
Also this week on Tech Policy Press, Bulelani Jili highlights the need for countries such as Kenya to update their regulatory frameworks – including data protection and cybercrime acts – to account for emerging challenges from AI.
Parliamentary committees worldwide can use their investigatory powers to shine a light on the introduction of inappropriate or harmful technologies. They can call on governments to take action and follow up on policy responses, and they can conduct post-legislative scrutiny to examine gaps in legal frameworks related to emerging technology.
Also in the news: despite claims of its ability to perform language translation, ChatGPT has been shown to work far less effectively in languages including Bengali, Swahili, Urdu, and Thai. Its outputs included “fabricated words, illogical answers and, in some cases, complete nonsense”. This demonstrates how AI can exacerbate a global digital divide, owing to a lack of training data representing Global South populations.
This LSE blog also identifies the risk of global AI governance leaving behind countries in the Global South, flagging a lack of access to AI technology and resources and an underrepresentation of diverse communities in training data.
MPs in the Global South can advocate for high-quality local datasets, including this as a priority in national strategy and policy. In international forums, they can highlight cases where inappropriate or ineffective AI products have been introduced.
AI and the Role of Parliament. Read more here.
Some news from the UK this week. The Science, Innovation and Technology Committee of the House of Commons issued an interim report on the governance of artificial intelligence. The report identifies 12 ‘challenges of AI governance’, covering societal impacts, transparency issues and governance/regulatory challenges. While the UK government has already issued a white paper on AI regulation - adopting a pro-innovation approach and relying on sectoral regulation - the report goes further in calling for an AI Bill to be introduced, in part to position the UK as an AI governance leader.
At the international level, the report highlights a tension between the need to engage “as wide a range of countries as possible” and the need to provide a forum “for like-minded countries who share liberal, democratic values”. China is a key AI player, and I expect we’ll see this debate continue before the UK hosts an international AI Safety Summit in November.
The report has lessons for MPs worldwide in identifying how domestic legislation and international engagement work in tandem to address AI challenges. The importance of a clear national strategy, backed up by AI-specific legislation, comes through.
In other news:
Also in the UK, I was interested in the call from Alicia Kearns, Chair of the Foreign Affairs Committee, to upskill MPs on AI and to encourage those with AI expertise to enter parliament. How parliaments and MPs are supported to understand and specialise in AI is a pressing issue. More here.
In late September, the Inter-Parliamentary Union held a Summit of Committees of the Future. Attendees from 70 parliaments discussed AI governance, with some calling for global regulatory frameworks similar to nuclear weapons regimes. Countries such as Finland have prominent committees with mandates to report on emerging technology. Across other parliaments, specialised parliamentary committees or sub-committees may need to take on a greater role in AI governance.
More topics are coming on the Substack along with these regular digests. Please reach out with ideas and feedback!