
Innovation for Democracy Cafe
Full Fact AI: Technology for Combating Disinformation


Bad information ruins lives. It promotes hate, damages people’s health, and hurts democracy. It can lead to bad decisions, disrupting public debate on the issues that most affect us, including climate change, immigration, and public spending. At Full Fact, we’re a group of independent fact checkers and campaigners who find, expose and counter those harms.

We believe people deserve better, and we work with politicians, news outlets and organisations to improve the quality of information available to people, strengthening democracy and helping them make informed decisions.

Full Fact AI

Fact checking isn’t easy work, but AI tools can help identify and challenge more claims than ever before. Our experience over the last decade tells us that fact checking is a hard, rigorous, and complicated process. We know that when nuance is involved, humans are better than machines. But we are also clear that technology can help us be more efficient.

The combination of AI and human expertise allows Full Fact AI’s users to combat misinformation while maintaining high standards of accuracy and credibility.

Every day, fact checkers worldwide find, check and challenge false claims identified by AI-enabled software produced by our dedicated, in-house AI team. This team has developed a set of tools, called “Full Fact AI”, which is designed to alleviate the pain points experienced in the fact checking process.

In 2019 our organisation was chosen from more than 2,600 nonprofits, social enterprises and research institutions around the world as a winner of the Google AI Impact Challenge. We received $2 million over three years and coaching from Google's AI experts. We built a suite of tools that solve unique challenges faced by fact checkers around the world.

Beyond our work in the UK, Full Fact has also worked with fact checkers in other countries around the world. This network, built within the International Fact Checking Network, has allowed us to broaden our understanding of the challenges fact checkers face in the modern world and of how technology can (and can’t) help.

Based on our success, Google.org has invested a further $1.8 million to help us radically scale Full Fact AI and provide it to more users across the global fact checking community. This includes extending the tools to collect content in multiple languages and then analyse it, as well as making them available to users in multiple fact checking organisations globally. Working with Google.org, we are particularly keen to assist fact checking during elections, where the potential harm from misinformation is greater.

The goal

Our aim is to use Full Fact AI to aid fact checking and fact checkers in three key areas: identifying the most prominent claims to check each day; knowing when someone repeats a claim they already know to be false; and checking claims in as close to real time as possible.

Full Fact AI isn't exclusive to us. Earlier in 2023, Full Fact collaborated with more than 20 Nigerian fact checkers covering the Nigerian presidential elections. During the election period, Full Fact AI collected and analysed over 40,000 checkable claims each day from over 80 media sources. These were then filtered by a search function using keywords, claim types, and speakers, as detailed below, letting the fact checkers quickly identify the claims with the greatest potential to cause harm.

By making the technology available to others, we’re contributing to a collaborative effort, helping media outlets, civil society, platforms, policymakers, and beyond better understand the landscape, and spread the benefits of fact checking.

Full Fact AI is an aid to the human element of fact checking: once a claim has been identified, the process moves offline for a while whilst a fact check is written up. For our own website, after publication, we describe each fact check with some very specific markup, called ClaimReview. This is part of the wider schema.org project, which describes content on a range of topics in domain-specific terms and helps to make fact checks machine-readable as well as human-readable.
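For illustration, a minimal ClaimReview record might look like the sketch below, built here as a Python dictionary and serialised to the JSON-LD that gets embedded in a page. The URL, claim, names and rating values are all invented for this example, not taken from a real fact check.

    import json

    # A minimal schema.org ClaimReview record. All values are invented
    # for illustration; a real record describes a published fact check.
    claim_review = {
        "@context": "https://schema.org",
        "@type": "ClaimReview",
        "url": "https://fullfact.org/example-check/",      # hypothetical URL
        "claimReviewed": "GDP has risen by 5% this year",  # hypothetical claim
        "itemReviewed": {
            "@type": "Claim",
            "author": {"@type": "Person", "name": "A. Politician"},
            "datePublished": "2023-05-01",
        },
        "author": {"@type": "Organization", "name": "Full Fact"},
        "reviewRating": {
            "@type": "Rating",
            "ratingValue": 1,
            "bestRating": 5,
            "worstRating": 1,
            "alternateName": "False",  # the verdict in words
        },
    }

    # Typically embedded in the page inside a
    # <script type="application/ld+json"> element.
    print(json.dumps(claim_review, indent=2))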

This matters to us because describing our content so specifically helps ensure that our fact checks can travel further than our own platforms. Fact checks can form a vital part of the web: over 130,000 fact checks exist in the Google Fact Check Explorer, and they were seen over 4 billion times in 2019 in Google Search alone.

Finding claims worth checking

We define a claim as any statement about the world that is either true or false (even if we don’t know which it is). There are many different types of claims, including claims about quantities (“GDP has risen by x%”); claims about cause and effect (“this policy leads to y”); and predictive claims about the future (“the economy will grow by z”). Many claims are not interesting to us as fact checkers, some because they are straightforwardly true and some because they are uncheckable, such as predictions or opinions.
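As a sketch of how such a taxonomy might be represented in software (the categories mirror the examples above; the structure itself is our illustration, not Full Fact's internal schema):

    from dataclasses import dataclass
    from enum import Enum, auto

    class ClaimType(Enum):
        QUANTITY = auto()      # "GDP has risen by x%"
        CAUSE_EFFECT = auto()  # "this policy leads to y"
        PREDICTION = auto()    # "the economy will grow by z"
        OPINION = auto()       # not checkable
        OTHER = auto()

    @dataclass
    class Claim:
        text: str              # the sentence containing the claim
        claim_type: ClaimType
        checkable: bool        # predictions and opinions are not

    example = Claim("GDP has risen by 2%", ClaimType.QUANTITY, checkable=True)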

Full Fact AI can identify the names of people, places and organisations mentioned in claims, as well as who is making each claim and, where appropriate, which political party they represent.
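A minimal sketch of this kind of entity extraction, using the open-source spaCy library (one of several toolkits that can do this; we are not suggesting it is the one behind Full Fact AI):

    import spacy

    # Requires the small English model:
    #   python -m spacy download en_core_web_sm
    nlp = spacy.load("en_core_web_sm")

    doc = nlp("The Chancellor told the BBC that GDP rose by 2% last year.")
    for ent in doc.ents:
        # Labels include PERSON, ORG, GPE (places), PERCENT and DATE
        print(ent.text, ent.label_)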

Labelling claims in this way helps filter the volume of data we process from hundreds of thousands of claims to a few thousand per day. This is a vital first step in ensuring that the users of our tools have a chance to make sense of all the information, but it is not enough.

We then filter these claims further by topic (like health or the economy) and identify which sentences are making specific, important and relevant claims. This way, we can ensure that a user can instantly see just the most important claims on each topic, filtered and ranked automatically from the hundreds of thousands of sentences we process every day.
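In outline, this stage behaves like a filter-and-rank step over the day's claims. A simplified sketch, where the score callable stands in for the trained model that judges how specific and important each claim is:

    def filter_and_rank(claims, topic, score, top_n=50):
        """Keep one topic's claims, then rank by an importance score.

        `claims` is a list of dicts like {"text": ..., "topic": ...};
        `score` is any callable mapping a claim to a number, standing in
        for the trained ranking model.
        """
        on_topic = [c for c in claims if c["topic"] == topic]
        return sorted(on_topic, key=score, reverse=True)[:top_n]

    # Example: surface the day's top health claims (toy scoring function).
    claims = [
        {"text": "Hospital waiting lists doubled last year", "topic": "health"},
        {"text": "The weather was nice yesterday", "topic": "other"},
    ]
    top = filter_and_rank(claims, "health", score=lambda c: len(c["text"]))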

Making fact checks work harder

Alongside the analysis to find claims worth checking, sentences are compared to claims that have already been checked by our fact checkers. We look for repeats of specific claims rather than just new claims on the same broad area. We do this claim matching with a combination of tools looking for semantic similarity as well as shared mentions of specific people, places or organisations.
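One common way to measure semantic similarity is with sentence embeddings. The sketch below uses the open-source sentence-transformers library and a cosine-similarity score; it illustrates the idea rather than reproducing Full Fact's actual matching system, which also weighs shared mentions of people, places and organisations:

    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose model

    checked = "GDP grew by 2% last year"             # a previously checked claim
    candidates = [
        "Last year the economy grew by two per cent",
        "Unemployment fell to a record low",
    ]

    emb_checked = model.encode(checked, convert_to_tensor=True)
    emb_candidates = model.encode(candidates, convert_to_tensor=True)

    # Cosine similarity between the checked claim and each new sentence;
    # sentences above a tuned threshold are flagged as likely repeats.
    scores = util.cos_sim(emb_checked, emb_candidates)[0]
    for sentence, s in zip(candidates, scores):
        print(f"{float(s):.2f}  {sentence}")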

If a politician or journalist gets it wrong in public, we ask that they correct themselves in public too. We focus on false claims which have the potential to cause the most harm, and where there is the clearest route to change.

Evidence tells us that corrections can be most effective if whoever said the claim corrects it themselves. This helps us to change attitudes and behaviours, encourage a culture of accuracy, and gather evidence about how well the systems meant to stop bad information reaching the public are working. Where appropriate, we can also work with regulators, the press and public and political campaigns to improve the overall state of public information.

ChatGPT and the future

Recently, the public launch of ChatGPT brought these tools into the public consciousness and provided the means to generate fluent text on any subject in any style. While there are many creative and professional uses for such tools, they do raise specific issues for society.

By reducing the cost of generating fluent text to effectively zero, there is a risk that bad-faith actors will generate floods of low-quality content. Some of this can be treated as background noise and safely ignored (although filtering it out may take extra effort). But some might be deliberate propaganda or disinformation campaigns, targeting individual readers with the messages most likely to have an impact.

Full Fact has been on the frontline of fact checking since 2010. We know firsthand how technology can both help and hinder the process. The prevalence of AI and its impact are too big to ignore.

Despite all the justified excitement, it must be stressed that ChatGPT and similar tools are not designed to be truthful or accurate. They are trained to produce text that looks superficially similar to the kind of text they were trained on, while responding to arbitrary prompts. But ChatGPT has no ability to verify or sense-check its own output. Therefore generative AI models cannot be used for fact checking, nor should their output ever be relied on.

This is an article contributed by Dr. David Corney, one of the guests of the FNF Global Innovation Hub’s 4th episode of Innovation for Democracy Café: Discover AI for Democracy. Learn more about the Café’s 4th episode here: https://www.freiheit.org/taiwan/register-4th-episode-our-innovation-democracy-cafe-discover-ai-democracy

Do you want to learn how AI can be used to combat disinformation? Register for our online Innovation for Democracy Cafe on Tuesday, June 27, 17:00-18:30 (GMT+8)!
