Taiwan is used to dealing with disinformation, especially from mainland China. The Chinese Communist Party (CCP) wants Taiwan reunified with the mainland, and the typically more anti-Beijing Democratic Progressive Party (DPP) stands in the way of its expansionist ambitions. Technology that could undermine the DPP therefore benefits the CCP. What is more, generative AI can propagate falsehoods in more destructive ways.
Compared to other tactics used to spread online rumours, such as bot networks, AI-generated content is harder to identify. Indeed, research has shown that AI-generated fake news can even be harder to spot than fake news produced by humans. AI-based social media accounts ‘speak’ more like authentic social media users, producing more complex, targeted and believable rumours.
As such, CCP-linked disinformation efforts have evolved from spouting pro-unification messages to creating content that appears to be written by Taiwanese citizens concerned about their society and American influence on it, including rumours that poisoned pork was being imported from the US.
Furthermore, AI-based disinformation can better overcome linguistic barriers. Generative AI makes it far easier for rumours to be written in the traditional Chinese characters used in Taiwan, rather than the simplified characters used in mainland China — instead of requiring a human who understands traditional Chinese to write a post or news article, AI can do this instantaneously.
Clearly, the growing danger of AI-generated fake news stems not just from its heightened believability, but also from the fact that it makes the resources of bad actors go further, requiring less human input and technical know-how. Creating disinformation has never been cheaper, faster or easier.
Worse yet, generative AI allows disinformation campaigns to diversify their output: alongside text-based content, it can manipulate and create synthetic audio, images and videos, collectively termed “deepfakes”. This enables a multi-sensory assault on reality which, as the outgoing Taiwanese president put it, “create[s] disturbance in the minds of the people”, and inflates the “liar’s dividend”, as people become increasingly distrustful of all online information, whether false or not.
It is little wonder that during election time, when the media is awash with competing political narratives, deepfakes have their most distortive effects on people’s beliefs.
In the run-up to the Taiwanese election, deepfakes of Lai Ching-te circulated on channels including YouTube and TikTok. One video showed Lai being criticised for supposedly having three mistresses; another, an audio deepfake, claimed he had gone to the US for a job interview. These deepfakes attempted to paint Lai as unfit for political office, inflame fears of US influence in Taiwan, and generally pollute public discourse.
Deepfakes are not unique to Taiwan. In the UK, deepfake audio of Keir Starmer shouting at his assistants appeared on X during last year’s Labour Party conference. Yet the Taiwanese election should warn the UK that the onslaught of AI-generated fake news will only worsen.
We cannot be so hubristic as to think ourselves immune to the social discord that deepfakes can create. Indeed, recent research by the political consultancy Thinks found that UK voters are highly vulnerable to AI-generated deepfakes and increasingly distrust electoral processes: 30 per cent of adults believe a UK election is more likely to be ‘rigged’ than ‘fair’. This toxic combination must be addressed, or we risk an AI-triggered equivalent of the January 6th events in the US.
Yet despite the doom and gloom, the rising tide of AI-generated fake news can be countered. Taiwan has been showing the way forward.
Taiwan has developed extensive educational programmes to help make citizens more discerning about the information they read. Further, Taiwan’s tech and civic organisations have been central to the fight against disinformation, employing some innovative tactics that use generative AI to tackle AI-based falsehoods.
For example, the organisation Cofacts runs a chatbot that allows users to submit news stories or messages and check whether they are false. Cofacts is one of many Taiwanese organisations working to fact-check electoral information. By prioritising education and civil society in its anti-fake-news strategy, Taiwan has given its citizens the tools to decide for themselves what information to trust, without becoming overly censorious or placing control of speech in the hands of a few social media platforms.
Although the tangible effects of Taiwan’s anti-disinformation strategy, namely on the election’s outcome, are hard to gauge, Lai’s victory perhaps suggests that the Taiwanese electorate has grown resilient to AI-generated lies.
With the UK’s general election mere months away, it will be an uphill struggle to tackle the harms of AI-generated mis- and disinformation. Nonetheless, we can learn from Taiwan’s experience, so that we can effectively prepare for the AI-based assault on our democracy.
In a year when nearly half the global population go to the ballot box, democracies will find themselves on the front lines of AI-based information warfare. Taiwan recently held its election, which saw Lai Ching-te of the Democratic Progressive Party (DPP) emerge victorious. The election demonstrated the potential for generative AI to be used by bad actors to undermine trust in politicians, distort the truth, and sow discord.
Social media increasingly dominates how we receive information about the world. Technologies such as generative AI, which shape online narratives, are therefore more influential than ever. Indeed, AI-generated mis- and disinformation is viewed as one of the top risks of 2024. According to a World Economic Forum report, it “could influence voters and lead to protests…violence or radicalization”.
Sarah Kuszynski is a research assistant at Bright Blue.