Squeaky-clean internet
Wouter met Martin Riedl in the middle of Berlin to discuss how platforms keep the internet "clean" from undesirable content – but what is undesirable? And what does it mean for moderators when they click through thousands of graphic images?
To ensure that the content on social media platforms adheres to the standards set by those platforms, thousands of content moderators decide every day which content should be deleted. Martin Riedl, former fellow and currently associated researcher at the HIIG, is interested in the labour conditions and well-being of those workers. In his current research, he looks at the question: how can we reveal the minimum amount of information to a human reviewer such that an objectionable image can still be correctly identified?
To find an answer, their current study looks at how blurred images affect the moderation experience with respect to accuracy and emotional well-being: But Who Protects the Moderators? The Case of Crowdsourced Image Moderation (Brandon Dang, Martin J. Riedl, Matthew Lease, 2018). The working paper was presented at the 6th AAAI Conference on Human Computation and Crowdsourcing (HCOMP 2018) and the 6th ACM Collective Intelligence Conference (CI 2018) in Zürich, Switzerland.
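The post does not describe how the images in the study were blurred, but the general idea of showing a reviewer a reduced-information version of a picture can be sketched with a simple Gaussian blur. The snippet below is only an illustration, assuming Python with the Pillow library; the file names and blur radii are hypothetical and are not taken from the paper.

```python
# Illustrative sketch (not the study's actual pipeline): produce blurred
# variants of an image so a reviewer sees less graphic detail while the
# overall content may still be recognisable.
from PIL import Image, ImageFilter

def blurred_variants(path, radii=(2, 8, 16)):
    """Return blurred copies of the image at increasing blur strengths."""
    original = Image.open(path)
    # A larger radius removes more detail from the image shown to the reviewer.
    return {r: original.filter(ImageFilter.GaussianBlur(radius=r)) for r in radii}

# Hypothetical usage: save each variant for side-by-side comparison.
if __name__ == "__main__":
    for radius, img in blurred_variants("example.jpg").items():
        img.save(f"example_blur_{radius}.jpg")
```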