
Artificial Intelligence and its role in disinformation

Excerpt from the publication Deepfakes & Disinformation
Deepfakes

Although the roots of the technology stretch back to the mid-20th century, artificial intelligence received little attention for a long time. The long AI winter only began to abate in the early 2010s. In 2011, IBM’s computer system Watson beat the best human players in the television quiz show Jeopardy, Google’s self-driving car prototypes travelled more than 100,000 miles (160,000 kilometres) and Apple introduced its “smart personal assistant” Siri. Since then, public interest in artificial intelligence, and especially in the risks associated with it, has been steadily growing. The discourse on superintelligence – triggered by a book of the same title by Nick Bostrom, published in 2014 – generated even more attention. Prominent personalities have since repeatedly warned about AI, sometimes taking on an alarming tone.

Stephen Hawking (“The development of full artificial intelligence could spell the end of the human race.”) and Elon Musk (“AI is a fundamental existential risk for human civilisation.”) are frequently cited. While superintelligence and so-called “strong AI” (AGI, Artificial General Intelligence) are still in the distant future, “weak AI” and its arguably not-so-weak algorithms are already playing a steadily expanding role in business, society and politics. In the author’s opinion, the effects on health, energy, security, mobility and many other areas will be largely positive. However, we will only be able to enjoy the positive aspects of these developments if we recognise the risks associated with this technology and successfully counteract them.

One such risk is misuse of the technology to deliberately disseminate false information. Of course, politically motivated disinformation is not a new phenomenon. Stalin and Mao are the most prominent examples of dictators who regularly ordered their photographs to be edited to ensure that old images would be consistent with the latest “truth”: anyone who had fallen out of favour was removed from pictures, new additions to the party leadership were retroactively edited in; even the context of pictures was modified, for example by changing the background. The goal of manipulating these visual records was to create new facts, to rewrite past events and history itself.

Historically, performing these modifications was tedious and required specialised knowledge; today, with the right smartphone app, anybody can do the same effortlessly. And the technology has not stopped at photography. Producing a fake video that appears believable still requires a fair amount of effort. But certain methods of artificial intelligence are making it increasingly easy to manipulate existing videos. These videos have become known as “deepfakes”. They are still relatively uncommon on the internet, but as their use and dissemination increase, they are turning into a growing challenge for our society. Not only does manipulated content spread very quickly on platforms such as Facebook or YouTube, it is also specifically targeted towards users who are receptive to it. Furthermore, the spread of disinformation is increasingly shifting towards messenger services such as WhatsApp, where encrypted messages are distributed over private connections. This increases trust in the forwarded information, creating a kind of hidden virality. Encryption of private online communications is a desirable commodity, similar to the secrecy of written letters – it prevents messages from being viewed by third parties. But encryption also means that disseminated information cannot be checked for truthfulness and moderated accordingly.