
Taming the Digital Realm: Global Content Moderation Practices


In this in-depth publication on global content moderation practices, we delve into the challenges and implications of regulating online platforms. Each case study sheds light on the unique sociopolitical context that shapes content moderation laws and their enforcement. By examining legislative attempts in Germany, the European Union (EU), the United Kingdom, Sri Lanka, Taiwan, and across Africa and Latin America, we aim to provide insight into how governments are responding to the digital realm, where vast amounts of information are created daily.

Like many other countries, Taiwan sought in 2022 to regulate online platforms and impose transparency and content moderation obligations on service providers. The Taiwanese government proposed a draft Digital Intermediary Services Act (DISA) but soon faced criticism from civil society groups, industry associations, and the general public. In response to these concerns, Taiwan swiftly suspended the process of launching the DISA, citing the lack of public consensus. This episode exemplifies the international influence of legislation adopted by leading democracies, as the DISA draft closely mirrored the EU's Digital Services Act (DSA). Yet it also serves as a cautionary reminder that in jurisdictions with weaker safeguards, such replication may compromise digital rights and freedom of expression online, or even become a tool abused by authorities. The German case study, for instance, points out how authoritarian regimes such as Russia and Singapore invoke measures similar to the German Network Enforcement Act (Netzwerkdurchsetzungsgesetz, NetzDG) to oppress dissidents, albeit without the rule-of-law principles and oversight mechanisms that exist in Germany.

The enforcement of content moderation laws must take the local sociopolitical and cultural context into account, as the authors of the Latin America and Africa chapters in this publication emphasize. In Kenya, for example, both Meta and TikTok outsourced the review of online content to third-party services. These providers, however, neither ensured that moderators understood the local languages nor gave them sufficient mental health support to cope with the traumatizing content they had to review. Collaborative decision-making processes and compliance with international digital rights standards are also crucial. In Brazil, unfortunately, civil society actors were not actively included in drafting regulations to combat fake news, raising the question of whether such laws would lead to self-censorship and the penalization of legitimate speech. Even worse, when content moderation outcomes contradict official narratives, some African governments deploy internet shutdowns and platform bans to punish social media platforms, as seen in Uganda and Nigeria.

Hate speech, disinformation, and harmful online content have long been evolving threats to democracy. Governments must therefore strike a delicate balance between moderating content, holding digital platforms accountable, and upholding users' fundamental rights. For countries with poor track records in law enforcement, initiatives that empower stakeholders without granting additional regulatory power to the authorities could help create a healthier information ecosystem, as the Sri Lanka case study suggests.

In conclusion, this publication offers valuable insights into content moderation practices across different regions. By analyzing their successes, challenges, and potential pitfalls, we aim to contribute to the ongoing debate on creating effective, contextually appropriate legal frameworks that protect both freedom of expression and the well-being of users. We hope this publication serves as a starting point for further discussion, policy-making, and international collaboration on content moderation.