European Democracy Action Plan: Mechanisms Over Content

The EU Commission chooses the right approaches to regulate disinformation

While in Germany one often gets the impression that disinformation is not regarded as a serious threat to social cohesion and democratic discourse, the debate at the European level has already progressed considerably. With the “European Democracy Action Plan” presented on 3 December 2020, the EU Commission, and in particular the liberal Vice-President of the Commission Věra Jourová, shows that it has not only understood the problems disinformation causes but is also choosing the right regulatory approaches.

“We do not want to create a ministry of truth, freedom of speech is essential”, said Jourová when presenting the Action Plan. This is also reflected in the proposed measures against disinformation, which refrain from regulating content. That is something rarely, if ever, heard in the German debate on measures against disinformation. The debate among politicians, if it is taking place at all, usually ranges only between making “fake news” punishable and “deleting content”. Yet neither of these poles is feasible, justifiable under the rule of law, or sustainable in its impact.

It is good news not only that the Action Plan consistently speaks of disinformation and avoids the fuzzy, polemical term “fake news”, but also that it shifts from previously ineffective self-regulation to regulated self-regulation (co-regulation). To this end, the Action Plan provides, for example, that fact-checking by partner organisations on social media platforms must be carried out far more transparently and according to defined standards. These standards should provide a framework by which the cooperating fact-checking organisations check content and, where necessary, mark it as “misleading” or “false”. This would not only serve platform users but also lay the basis for better cooperation between social media platforms and fact-checkers. Measures against disinformation on the platforms are also to be monitored more closely and their effectiveness reviewed against defined success factors. Likewise, the long overdue provision of data for research is to be ensured, in compliance with the General Data Protection Regulation; to this end, a corresponding framework is to be developed with the involvement of all relevant stakeholders.

It is also positive that the Action Plan focuses on the mechanisms that contribute to the spread of disinformation. The Commission has taken the right approach here by addressing the actors and the dissemination mechanisms rather than the actual content. Apart from illegal content, content itself simply cannot be regulated; attempting to do so would amount to a ministry of truth. The Action Plan proposes that platforms must take measures to prevent the artificial amplification of disinformation. Twitter implemented this during the US presidential election, for example, when it became impossible to like or retweet Donald Trump’s tweets marked as “misleading” or “false”. Facebook refrained from such measures and merely labelled the President’s false claims, and did so quite late.

In addition, platforms are supposed to make it harder to profit financially from disinformation. This is aimed primarily at those actors who see disinformation not as a tool for dividing societies but as a means of financial gain. To this end, platforms should ensure, for example, that advertisements are no longer shown in videos that have been classified as “misleading” or “false”. In the past, for instance, criminals created “news portals” that generated clicks with lurid, invented “news” and earned substantial sums through advertising networks that displayed banners on these websites or in videos. This measure targets both advertising networks on social media platforms and networks that display advertising on third-party sites, such as Google’s AdSense.

The Action Plan presented is, for now, no more than a plan. But it gives hope that the mechanisms and modes of action threatening democracies in the digital space have been understood. It focuses on the mechanisms partly responsible for the rapid spread of disinformation on platforms, not on the content, and in doing so protects our fundamental and civil rights. The proposed measures can only be a start towards good regulation of the digital space to protect our democracies. How they interact with the Digital Services Act, due to be presented on 15 December 2020, will be worth watching.