Identity-related Speech Suppression in Generative AI Content Moderation

Automated content moderation has long been used to help identify and filter undesired user-generated content online. Generative AI systems now use such filters to keep undesired generated content from being created by or shown to users. From classrooms to Hollywood, as generative AI is increasingly used for creative or expressive text generation, whose stories will these technologies allow to be told, and whose will they suppress? In this paper, we define and introduce measures of speech suppression, focusing on speech related to different identity groups that is incorrectly filtered by a range of content moderation APIs. Using both the short-form, user-generated datasets traditionally used in content moderation and longer generative AI-focused data, including two datasets we introduce in this work, we create a benchmark for measuring speech suppression across nine identity groups. Across the one traditional and four generative AI-focused automated content moderation services tested, we find that identity-related speech is more likely to be incorrectly suppressed than other speech. We find differences in identity-related speech suppression between traditional and generative AI data, with the APIs performing better on generative AI data but worse on longer text instances, and across identities, with identity-specific reasons for incorrect flagging behavior. Overall, we find that on traditional short-form data the incorrectly suppressed speech is likely to be political speech, while for generative AI creative data it is likely to be descriptions of television violence.
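The abstract does not spell out how suppression is computed; one plausible operationalization, sketched below in Python, treats the suppression rate as the false positive rate of a moderation API on benign text, broken out per identity group. The function name, data layout, and the bucketing of identity-free text are illustrative assumptions, not the authors' implementation.

    from collections import defaultdict

    def suppression_rate(examples, flagged):
        """Fraction of benign (non-toxic) texts that a moderation API flags,
        broken out by identity group.

        examples: list of dicts with keys 'text', 'label' ('benign' or 'toxic'),
                  and 'groups' (identity groups mentioned, possibly empty).
        flagged:  parallel list of booleans, True if the API flagged the text.

        NOTE: an illustrative sketch, not the paper's implementation; the
        data layout and false-positive-rate definition are assumptions.
        """
        benign_total = defaultdict(int)
        benign_flagged = defaultdict(int)
        for ex, flag in zip(examples, flagged):
            if ex["label"] != "benign":
                continue  # suppression only counts incorrectly filtered speech
            groups = ex["groups"] or ["none"]  # bucket identity-free text separately
            for g in groups:
                benign_total[g] += 1
                benign_flagged[g] += int(flag)
        return {g: benign_flagged[g] / benign_total[g] for g in benign_total}

    # Hypothetical usage: compare suppression of identity-related vs. other speech.
    examples = [
        {"text": "A scene about a queer couple's wedding", "label": "benign", "groups": ["lgbtq"]},
        {"text": "A recipe for banana bread", "label": "benign", "groups": []},
    ]
    flagged = [True, False]  # stand-ins for real moderation API responses
    print(suppression_rate(examples, flagged))  # {'lgbtq': 1.0, 'none': 0.0}

A per-group rate of this form would let identity-related false flagging be compared directly against a baseline of identity-free speech, which matches the comparison the abstract describes.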
@article{anigboro2025_2409.13725,
  title={Identity-related Speech Suppression in Generative AI Content Moderation},
  author={Oghenefejiro Isaacs Anigboro and Charlie M. Crawford and Grace Proebsting and Danaë Metaxa and Sorelle A. Friedler},
  journal={arXiv preprint arXiv:2409.13725},
  year={2025}
}