Current Projects

Transparency in Content Moderation

Even with a consistent definition of what content should be removed, implementing content moderation at scale requires either an enormous amount of human effort or a reliable machine classifier. Beyond these operational challenges, there are concerns that content moderation in democracies might in some cases constitute political censorship and curtail freedom of speech. Many have also raised concerns that content moderation might inadvertently harm marginalized groups, or be abused by social media companies in ways that undermine democracy. Because of these challenges, many academics, policymakers, and members of civil society have advocated for increased transparency in content moderation.
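To make the notion of a "machine classifier" concrete, the sketch below shows a deliberately tiny supervised text classifier of the kind such systems typically build on. The training posts, labels, and threshold logic are hypothetical illustrations, not data or methods from this project.

```python
# Illustrative only: a toy text classifier of the kind that might back
# automated content moderation. All data and labels here are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: posts labeled 1 (violates policy) or 0 (allowed).
posts = [
    "buy followers now cheap",        # policy-violating (spam-like)
    "great discussion, thanks all",   # benign
    "click this link to win money",   # policy-violating (spam-like)
    "interesting article on policy",  # benign
]
labels = [1, 0, 1, 0]

# TF-IDF text features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# Score a new post; a platform would act (flag, downrank, remove)
# when the predicted probability exceeds some chosen threshold.
score = model.predict_proba(["win free money, click now"])[0][1]
print(f"probability of policy violation: {score:.2f}")
```

Production systems differ in scale and sophistication, but the same basic structure, labeled examples, a learned scoring function, and an action threshold, is where questions of transparency arise: which labels, which threshold, and which actions are disclosed to users.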

While civil society organizations have recently outlined general principles of transparency and accountability, there is very little work on how these principles translate into concrete content moderation practices, or on their implications for the spread of information in democratic societies.

This project explores transparency in content moderation: both its current practice and its potential impact on social media platforms and their users. Drawing on a range of cutting-edge methods, with a focus on machine learning and experiments, the project will assess the current state of transparency in content moderation on major social media platforms and examine how different moderation strategies affect users, including bad actors who aim to subvert those strategies.

Project leader(s):

Period:

2023-2028