
Innovation for Democracy Café
The 4th I4D Café: How Can AI Tools Be a Pushing Force for Democracy?

© Toey Andante / Shutterstock.com

Artificial intelligence is a trending topic in modern-day society. We know that AI can assist us in various ways, such as writing a song, coding a computer program, or answering questions. However, the content generated by AI may not be entirely accurate or original. On top of that, people with bad intentions can disseminate disinformation with the help of AI tools. So how can we turn this controversial technology into a positive force for democracy?

In the 4th episode of the Innovation for Democracy Café, Ms. Ya-wei Chou from FNF Global Innovation Hub was joined by two experts to explore the beneficial uses of AI: Dr. David Corney from Full Fact in the United Kingdom and Mr. Ethan Tu from Taiwan AI Labs in Taiwan.

Using AI to Facilitate Fact-checking

“Bad information ruins lives.” Dr. Corney opened his presentation with this powerful statement. It is the conviction that drives Full Fact to strive for a healthier information environment: the better the information we have, the better the decisions we can make.

At Full Fact, AI tools are applied to combat disinformation and false claims made by politicians, journalists, and news outlets. The organization developed Full Fact AI, an AI-driven software tool that fact-checkers can use to identify the claims most worth checking; incoming information is filtered and labelled so that the most important claims can be tackled first. Another technique, called claim matching, allows them to quickly spot repetitions of false statements by looking for similar wording or references to the same information. Fact-checking is a time-consuming process, but with the assistance of AI, fact-checkers can now work much more efficiently.
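
The talk did not go into implementation details, but claim matching of this kind is typically built on sentence embeddings: previously fact-checked claims are turned into vectors, new statements are embedded the same way, and near-duplicates are flagged by similarity. The sketch below is only an assumed illustration using the open-source sentence-transformers library; the model name, example claims, and threshold are hypothetical, not Full Fact's actual code.

    # Assumed illustration of claim matching with sentence embeddings.
    # Not Full Fact's pipeline; model name, data, and threshold are placeholders.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")

    # Claims that have already been fact-checked.
    checked_claims = [
        "Crime has doubled in the last five years.",
        "The government spends more on debt interest than on schools.",
    ]

    new_statement = "Crime rates have doubled over the past five years."

    # Embed both sides and compare with cosine similarity.
    checked_emb = model.encode(checked_claims, convert_to_tensor=True)
    new_emb = model.encode(new_statement, convert_to_tensor=True)
    scores = util.cos_sim(new_emb, checked_emb)[0]

    THRESHOLD = 0.8  # assumed cut-off for "likely the same claim"
    for claim, score in zip(checked_claims, scores.tolist()):
        if score >= THRESHOLD:
            print(f"Possible repeat of a checked claim ({score:.2f}): {claim}")

A statement that scores above the threshold against an already checked claim can then be routed to a human fact-checker instead of being checked from scratch.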

The results of this fact-checking are not confined to the Internet, because Full Fact also carries its work into the offline world. After checking a claim, they actively reach out to the people and organizations that made it, asking them to correct the record. They are also constantly developing new technology to combat misinformation internationally and at the internet level, so that misleading information does not keep resurfacing. Finally, to improve the overall information environment, they call for fundamental changes, such as legal reforms to curb the spread of bad information online during elections.

Using AI to Detect the Malicious Spread of Disinformation

It is widely known that Taiwan stands at the front line of combating information operations, as it is targeted daily by disinformation, mostly originating from the People’s Republic of China and other countries. Such disinformation spreads easily through the Internet, provoking chaos around the world and undermining democracy.

Facing this challenge, Taiwan AI Labs has adopted a different approach from Full Fact’s. It uses AI to track and analyze information published on mass media and social media, including Facebook, Twitter, PTT (an online bulletin board system in Taiwan), and even Weibo. When an abnormal pattern in how a piece of information spreads is observed, it can signal a malicious information operation.
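
The session did not detail how an “abnormal trend” is identified, but one common, much-simplified way to flag one is to compare the current posting volume on a narrative against its recent baseline. The toy sketch below is an assumption for illustration, not Taiwan AI Labs’ method: it marks hours whose post counts deviate strongly from a rolling average.

    # Toy illustration of flagging abnormal spikes in posting volume.
    # Not Taiwan AI Labs' actual method; data and thresholds are made up.
    from statistics import mean, stdev

    # Hourly counts of posts pushing a given narrative.
    hourly_posts = [12, 9, 14, 11, 10, 13, 12, 95, 110, 102, 15, 11]

    WINDOW = 6       # hours of history used as the baseline
    Z_THRESHOLD = 3  # deviation (in standard deviations) counted as abnormal

    for hour in range(WINDOW, len(hourly_posts)):
        baseline = hourly_posts[hour - WINDOW:hour]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (hourly_posts[hour] - mu) / sigma > Z_THRESHOLD:
            print(f"Hour {hour}: {hourly_posts[hour]} posts looks anomalous (baseline ~{mu:.0f})")

In practice, a volume signal like this would be combined with account-level analysis before anything is labelled an information operation.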

They then apply natural language processing to build a knowledge graph, ultimately revealing the manipulation behind misleading posts and articles.
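
The talk did not explain how such a knowledge graph is built. A common, simplified approach is to extract named entities from posts with an NLP pipeline and then link entities that are mentioned together; the sketch below assumes the entities have already been extracted and shows only the graph-building step with the networkx library. All data here is invented for illustration.

    # Simplified, assumed illustration of a co-occurrence knowledge graph.
    # Entity extraction (an NLP step) is assumed to have happened already.
    from itertools import combinations
    import networkx as nx

    # Entities extracted from individual posts (toy data).
    posts_entities = [
        ["Candidate A", "Election fraud", "City X"],
        ["Candidate A", "Election fraud"],
        ["City X", "Protest"],
    ]

    graph = nx.Graph()
    for entities in posts_entities:
        for a, b in combinations(sorted(set(entities)), 2):
            # Edge weight counts how often two entities appear together.
            weight = graph.get_edge_data(a, b, {}).get("weight", 0) + 1
            graph.add_edge(a, b, weight=weight)

    # Heavily weighted edges hint at narratives being pushed repeatedly.
    for a, b, data in sorted(graph.edges(data=True), key=lambda e: -e[2]["weight"]):
        print(f"{a} -- {b}: mentioned together {data['weight']} times")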

They also use AI to detect trolls and analyze their behavior. According to Taiwan AI Labs, accounts on social media can be divided into two groups: organic accounts and troll accounts. Organic accounts behave like normal human beings who browse the internet as they please. Troll accounts, on the other hand, tend to show high similarity in behavior and low diversity in content. For example, based on Taiwan AI Labs’ research, troll accounts are usually active from 9 to 5, Monday to Friday, posting the same false information on social media again and again. Take troll accounts on PTT as an example: according to Mr. Tu, at the height of the pandemic they were most active at 2 pm, right when Taiwan’s CDC announced its daily COVID-19 updates.
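
As a loose illustration of the behavioral patterns Mr. Tu described (not Taiwan AI Labs’ actual detection system), one could compute two simple features per account, the share of posts made during weekday office hours and how repetitive the posted text is, and flag accounts where both are extreme. Everything in the sketch below is hypothetical toy data.

    # Hypothetical illustration of two troll-account signals described above:
    # office-hours-only activity and low content diversity.
    from datetime import datetime

    def office_hours_share(timestamps):
        """Fraction of posts made Monday-Friday between 09:00 and 17:00."""
        office = [t for t in timestamps if t.weekday() < 5 and 9 <= t.hour < 17]
        return len(office) / len(timestamps)

    def content_diversity(posts):
        """Fraction of unique posts; values near 0 mean the same text is repeated."""
        return len(set(posts)) / len(posts)

    # Toy account: ten identical posts, all at 2 pm on weekdays.
    account_posts = ["The outbreak is out of control"] * 10
    account_times = [datetime(2021, 6, 7 + d, 14, 0) for d in (0, 1, 2, 3, 4, 7, 8, 9, 10, 11)]

    if office_hours_share(account_times) > 0.9 and content_diversity(account_posts) < 0.2:
        print("Account looks troll-like: office-hours-only posting, highly repetitive content")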

Based on these projects, Taiwan AI Labs launched a website called “infodemic.cc”. Users can submit a piece of information they have come across, and the website will assess whether it shows signs of manipulation, based on the Labs’ knowledge of user behavior.

Taiwan AI Labs also initiated Taiwan Project Lutein to investigate whether social media platforms’ content moderation mechanisms are fair and fulfill their purpose of removing malicious information. In collaboration with the Taiwan FactCheck Center, the Labs found that a lot of misinformation in Taiwan is not removed from social media even after it has been fact-checked. Hate speech and misinformation accounted for only 1.6% of the content that was removed or blocked, while around 98% of the removed or blocked content concerned geopolitical issues, which should still fall within the realm of freedom of speech and should not have been removed.

In light of these results, Taiwan AI Labs developed the Miin app to provide the public with diverse sources of information and to keep people from receiving only the information curated by social media platforms’ biased content moderation. The app aggregates news from various media outlets and indicates whether the statements in them show signs of manipulation. With these tools, Taiwan AI Labs hopes to raise awareness in Taiwan and around the globe about information manipulation.

Building Trust with AI

These AI applications sound helpful, but how can people who are not technology experts, and who do not fully understand how the tools are built, know whether to trust them? Echoing this concern, an audience member asked, “How should we build an AI that is trustworthy?” Our panelists responded that it is crucial for organizations working with AI to build trust with their target audience and the general public, and to develop trustworthy AI. As Dr. Corney noted, Full Fact works largely on political issues, so it must remain unbiased and transparent in order to earn the public’s trust. Full Fact is a registered charity in the United Kingdom, so it has to disclose everything a registered charity is required to disclose, and the public can use this information to monitor whether Full Fact is doing the right thing. As for Taiwan AI Labs, Mr. Tu stated that they follow the principles of trustworthy AI computing, which ensure that the platforms and algorithms they use are transparent, traceable, understandable, and verifiable.

Should We Embrace or Stop the Development of AI?

Ms. Chou also asked whether human beings should continue researching and developing AI, given the many controversies and risks behind it. Both panelists replied that stopping AI development nowadays is neither possible nor practical. Therefore, as Dr. Corney suggested, we should embrace the technology carefully and raise awareness of AI’s presence in social media and in our lives. Furthermore, digital and media literacy should be a top priority as AI advances: if most people are properly educated about AI, they will know how to use it sensibly, reducing the potential risks.

Finally, on the topic of using AI to foster democracy, Ms. Ya-wei Chou asked the panelists to name two or three terms, tools, or concepts related to AI that democracy advocates should know. Ms. Chou elaborated that democracy advocates seem to be at a disadvantage in acquiring knowledge about AI compared with big businesses, governments, or even authoritarian actors. In response, Mr. Tu pointed out that many free AI tools and programs are available online, such as the free version of ChatGPT. He stressed that people should first try out the tools themselves. Only then can they comprehend how powerful such tools are and be truly mindful of the benefits and potential dangers of using AI in their advocacy work.