Do Users Want Platform Moderation or Individual Control? Examining the Role of Third-Person Effects and Free Speech Support in Shaping Moderation Preferences

Abstract

Online platforms employ commercial content moderators and automated systems to identify and remove the most blatantly inappropriate content for all users. They also provide moderation settings that let users personalize which posts they want to avoid seeing. This study presents the results of a nationally representative survey of 984 US adults. We examine how users would prefer three categories of norm-violating content (hate speech, sexually explicit content, and violent content) to be regulated. Specifically, we analyze whether users prefer platforms to remove such content for all users or to leave it up to each user to decide whether, and how much, to moderate it. We explore how presumed effects on others (PME3) and support for freedom of expression, the two factors prior literature identifies as most relevant to attitudes toward social media censorship, shape user preferences about this choice. We find that perceived negative effects on others and support for free speech are significant predictors of preferring personal moderation settings over platform-directed moderation for each speech category. Our findings show that platform governance initiatives need to account for both the actual and perceived media effects of norm-violating speech categories to increase user satisfaction. Our analysis also suggests that people do not see personal moderation tools as an infringement on others' free speech but as a means to assert greater agency in shaping their social media feeds.
