FNF Global Innovation Hub Releases “AI-Generated Disinformation in the 2024 Taiwan Presidential Election” Publication

© 2025 Friedrich Naumann Foundation for Freedom (FNF) Global Innovation Hub

Taiwan’s 2024 election became a battleground for AI-generated disinformation: deepfake videos, fake social media accounts, and AI-powered news anchors spread false narratives. A viral deepfake scandal attempted to discredit a leading presidential candidate, while AI-driven bot networks amplified pro-China messages. Though the disinformation campaign failed to sway the election outcome, it contributed to political polarization and raised urgent questions about the future of democracy. How can governments, tech companies, and traditional media counter AI-driven manipulation? The answers may surprise you.

Prof. Austin Wang has years of experience analyzing cognitive warfare, misinformation, and Taiwanese politics. We invited him to write this report in the hope that Taiwan’s case can offer valuable lessons to other democracies facing AI-facilitated disinformation campaigns in the digital era. In the report, Prof. Wang provides unique insight into how an AI-powered disinformation campaign was carried out during Taiwan’s 2024 presidential election, including a comprehensive analysis of the campaign targeting a presidential candidate, the complicated relationship between AI-generated content, public opinion, and the spread of misinformation, and how AI-generated content blurred reality on social media platforms.

AI-facilitated Disinformation Campaigns in Elections: A Case Study of Taiwan

How exactly did AI-generated disinformation impact Taiwan’s presidential election? Prof. Wang offers an example in his publication: On January 9, a week after the release of DPP presidential candidate Lai Ching-te’s latest campaign video, titled On the Road, accounts on X, Facebook, and TikTok simultaneously released a modified version of the video. Using synthesized lip movements and voices, the original campaign message was altered to make it appear that Lai admitted to having an illegitimate child and feared the exposure of a potential sex scandal. The modified video was then shared by hundreds of personal accounts and groups on Facebook. Similar videos also appeared on the messaging app Line and were reported to the fact-checking platform Cofacts. Channels that spread the altered video also shared articles and videos from Chinese official state media, and the video itself contained a large number of simplified Chinese characters, implying a connection to Chinese actors.

The Relationship between AI-Generated Videos, the Spread of Misinformation, and Public Opinion

Although Prof. Wang’s research indicates that the AI-altered video first appeared as far back as November 2023, it was not until January 9, 2024, just days before the election, that the video was massively distributed across social media. Prof. Wang found that the mass distribution of the altered video not only produced a spike in public interest in the illegitimate-child rumor but also distracted public attention from the presidential candidate’s policies and polling data. While the AI-generated disinformation did not significantly shift overall public opinion in Taiwan, the author’s analysis showed that supporters of Lai Ching-te’s opponents rated Lai more negatively after the mass distribution of the altered video on January 9.

Blurred Reality: AI-Generated Accounts on Social Media Platforms

In addition to the altered video, threat actors also leveraged AI to create fake accounts on social media platforms. During his investigation, Prof. Wang discovered that AI-generated Facebook accounts played an important role in the mass distribution of the AI-altered video across Taiwan. These accounts, which often featured profile pictures generated by image-generation or face-swapping software, were associated with Facebook groups that disseminate pro-China narratives. The use of AI-generated profile photos not only reduced operational costs but also helped these accounts avoid scrutiny. Moreover, because these profile pictures resembled typical local residents, the content the accounts distributed appeared more authentic.

In addition, more than 20 new YouTube channels appeared before the election, featuring virtual anchors with AI-generated voices. These virtual anchors read scripts drawn from Chinese state media news articles attacking DPP politicians. Having a virtual anchor read the script also made the information seem more authentic and easier to digest, especially for audiences who prefer audio content to reading lines of text.

Looking Ahead: Impacts of AI-Generated Content on Democracy and Policy Recommendations

While AI-generated content played only a minor role in the disinformation campaign leading up to the 2024 presidential election, it still managed to polarize voters and divert attention from other important issues. Incidents like the one discussed in the publication are likely to become more frequent and widespread, posing a significant threat to upcoming elections and the future of democracy. When these information operations originate from abroad, they cannot be regulated through Taiwan’s domestic legal framework, which threatens public discourse and democracy. Mitigating the impact of AI-generated content and foreign interference will therefore require collaborative efforts from governments, businesses, and civil society.

As AI-generated content is likely to become more prominent in upcoming elections, what actions can we take to mitigate the impact of AI-powered disinformation campaigns? First, Prof. Wang urges social media platforms to be more transparent and to release more data that helps the general public evaluate the trustworthiness of accounts. Policymakers can learn from the EU about what data social media platforms should retain and release, as seen in EU policies on advertising databases and administrator geolocation. However, they must also monitor cross-border donations and consider potential risks, such as authoritarian regimes exploiting transparency to target activists.

Second, Prof. Wang suggests that traditional media can restore their credibility as AI-generated content increases and overwhelms audiences. With the rise of misinformation, people may turn to long-established media outlets to save time and ensure reliability. Experiencing harm from false information could further motivate the public to engage with traditional media, fostering a trust-based relationship.

Third, the author argues that social media companies should proactively disclose information on foreign information manipulation and inauthentic behavior, rather than immediately deleting reported content. Transparent disclosure would help the public recognize past manipulation, enable systematic study of AI-driven disinformation, and support international efforts to address such interference.

Dr. Austin Horng-En Wang is an Associate Political Scientist at the RAND Corporation and an Associate Professor in the Department of Political Science at the University of Nevada, Las Vegas. His research focuses on political psychology, social media, cognitive warfare, and US-China-Taiwan relations. His research has appeared in the Journal of Peace Research, the HKS Misinformation Review, and the Journal of Computational Social Science, among others, and his commentary has appeared in The Diplomat, The Washington Post, The National Interest, and East Asia Forum, among other outlets.
