Social media platforms must filter sexist content to comply with governmental regulations. Current machine learning approaches can reliably detect sexism under standardized definitions, but they often neglect the subjective nature of sexist language and fail to consider individual users' perspectives. To address this gap, we adopt a perspectivist approach: rather than enforcing gold-standard labels or their aggregations, we retain diverse annotations, allowing models to account for personal or group-specific views of sexism. Using demographic data from Twitter, we employ large language models (LLMs) to personalize sexism identification.
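As a rough illustration of the demographic-instruction idea, the sketch below renders an annotator's demographic attributes into a persona instruction that an LLM could be given before labeling a tweet. The field names, wording, and label set are hypothetical assumptions for illustration, not the paper's actual prompts.

```python
# Hypothetical sketch: turn an annotator's demographic profile into a
# persona instruction for an LLM-based sexism classifier. All field
# names and prompt wording are illustrative assumptions.

def build_persona_instruction(demographics: dict) -> str:
    """Render demographic attributes into a persona-style system instruction."""
    traits = ", ".join(f"{key}: {value}" for key, value in demographics.items())
    return (
        f"You are an annotator with the following profile: {traits}. "
        "From this annotator's perspective, label the tweet as "
        "'sexist' or 'not sexist'."
    )

# Example annotator profile (hypothetical values).
annotator = {"gender": "female", "age": "25-34", "country": "UK"}
prompt = build_persona_instruction(annotator)
print(prompt)
```

The instruction string would then be sent alongside each tweet, so that different annotator profiles can yield different, perspective-specific labels from the same model.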
@article{paula2025_2505.11795,
  title={The Effects of Demographic Instructions on LLM Personas},
  author={Angel Felipe Magnossão de Paula and J. Shane Culpepper and Alistair Moffat and Sachin Pathiyan Cherumanal and Falk Scholer and Johanne Trippas},
  journal={arXiv preprint arXiv:2505.11795},
  year={2025}
}