Dean Russell is the MP for Watford.
A week is a long time in politics, but in the world of Artificial Intelligence, the future can become a reality in just a matter of hours. Recently, I hosted an adjournment debate in Parliament on Consumer Protections: AI Scams.
It focused on exploring the risks around AI-supported crime and the growing threat of scams using AI voice clones and deepfakes. I raised a point about the potential dangers to democracy, sharing concerns that election interference could come via AI voice clones calling voters on election day to encourage them not to vote.
I feared that bad actors could use cloned voices of family and friends to make calls and stall people from voting. Just a few hours later, US news channels reported that between 5,000 and 25,000 voters in the New Hampshire Primary had been called by Joe Biden and told not to vote; instead, Biden stated, 'Save your vote' for later in the year. Of course, this wasn't the President, but a convincing AI voice clone, or 'robocall' as the media dubbed it.
Such interference could be damaging as a one-off, but it's more terrifying when considering that a staggering 49 per cent of the world's population will vote in national elections, including the US and UK, this year. Democracies should not underestimate the opportunity for bad actors to deploy AI.
Deepfakes and voice clones are the rising stars of AI, and rarely a day goes by without a headline that wouldn't look out of place in an Isaac Asimov novel. In the US, a tipping point was recently reached when social media was set alight with explicit deepfakes of Taylor Swift. US politicians were drawn into the debate, raising concerns about legislation's role in protecting individuals from harm and combining the worlds of entertainment and politics.
Previously, it has been possible to identify the fakery. You may recall the easy-to-spot deepfake of President Zelensky recalling his troops in Ukraine. Similarly, the Hillary Clinton deepfake, in which she states, 'I actually like Ron DeSantis (a Republican) a lot. He's just the sort of guy this country needs,' was also easily spotted.
The technology is improving rapidly and doesn't necessarily need to be overly engineered. Both the voice clone of Keir Starmer during Labour's conference and the false AI audio of Sadiq Khan suggesting the armistice commemorations should be moved were just seconds long but shared widely.
Debunking content raises issues for the press, too. AI is not just about fake content. Its mere existence opens up a whole new debate on 'Fake News' – 'Deepfake News' that plants seeds of doubt over real content. Remember the leaked audio clip before the 2019 election of Jonathan Ashworth criticising Jeremy Corbyn? That recording was covered widely by mainstream media at the time and fed into the broader concerns about Labour. Imagine if that audio were leaked today. Would we not see a rush of Corbynistas labelling it a voice clone?
AI’s impact isn’t limited to organic content. Paid online advertising creates a high risk of highly targeted AI campaigns for specific groups, as with last year’s fake AI ads of Rishi Sunak that reached thousands via Facebook.
A University of Amsterdam study delved into the risk of targeting when assessing AI-generated disinformation’s impact on voter preferences. The researchers created a deepfake video of a politician offending his religious voter base. The findings highlighted that the religious Christian voters exposed to the deepfake video had more unfavourable attitudes toward the politician compared to those in the control group. The findings may not be surprising, but they highlight the risks of AI being highly targeted to specific demographics to create the most damage.
The phrase 'Rumour bombing' was coined by Carl Miller, a research director at the Centre for the Analysis of Social Media at Demos, to describe cases in American battleground states where voters were bombarded with bogus social media messages, including 'there's a shooting at the voting booth so the roads are closed' or 'the lines are over six hours long so don't come'. Similarly constructed rumour bombs targeting key voter groups are a real risk in the UK. One of the biggest challenges to democracy is growing political apathy. It is not inconceivable that some groups may be looking for a good excuse not to vote on election day, a risk of great magnitude in marginal seats.
Until recently, the main concerns around AI were long-term, from the impact on jobs to fears of sentience, but election risks are much more pressing. A former director for civic integrity has stated that "Threats to elections are concrete", and the recent NCSC annual review 2023 highlighted the security challenges our democracy faces due to AI. The report pointed out that the evolving landscape presents opportunities and efficiencies for our economy and society, but we must also ensure our democratic institutions and traditions are well prepared for this new phase in digital development.
What was once science fiction is now a matter of fact. The Indonesian election has interwoven AI into the campaign with some surreal examples, from the deepfake resurrection of a long-dead dictator giving a political endorsement to a dancing AI cartoon of the leading candidate. Many would not have predicted such examples just months ago. The most significant challenge in predicting AI's impact is that we don't know what will happen next.
Thankfully, the tech firms are beginning to wake up to the threats. Technology giants are looking at creating a new industry “accord” to tackle “deceptive artificial intelligence election content” that is threatening democracy via AI interference.
The Government is also listening. My experiences engaging with the relevant Ministers have shown a deep understanding of our challenges. Westminster’s biggest hurdle is that the solution will take more than legislation. The issues lie at a peculiar intersection between politics, technology, and public education. Many will rightly argue that AI challenges are already covered by electoral law. Still, the rules were not designed to deal with the scale and reach that personalised AI could bring.
Historically, any attempts at election interference or vote rigging in the UK required a human element, which limited their scope and impact. With AI, millions of citizens could be contacted by a deepfake of a family member or the candidate calling them to convince them not to vote. Knowing they have been tricked after the result is announced will not give any citizen the right to correct their mistake. The only way to protect democracy from election interference is through prevention.
Alongside working closely with tech platforms and telecom providers, the most powerful tool to protect democracy will be education. An excellent example is 'Stop! Think Fraud', the national anti-fraud campaign launched this week. We will need the same to warn and educate the public on AI risks ahead of the election.
In the long term, AI opens up many opportunities for society. But the risk of harm is more significant than ever. Bad actors will use this year's elections to practise and improve their anti-democratic campaigns.
As the saying goes, ‘A lie can travel halfway around the world while the truth is still putting on its boots’. With AI interference in elections, the polls may be closed before the truth has even tied its laces. We may not have time to use the law to protect democracy against AI without risking unintended consequences.
However, partnering with the tech and telecoms industry combined with a high-profile national educational campaign could keep our democracy safe for now by stopping AI from getting far in the first place.