Unveiling the Capabilities of Large Language Models in Detecting Offensive Language with Annotation Disagreement

Large Language Models (LLMs) have become essential for offensive language detection, yet their ability to handle annotation disagreement remains underexplored. Disagreement samples, which arise from subjective interpretations, pose a unique challenge because of their ambiguous nature. Understanding how LLMs process these cases, and in particular how confident they are about them, offers insight into their alignment with human annotators. This study systematically evaluates the performance of multiple LLMs in detecting offensive language at varying levels of annotation agreement. We analyze binary classification accuracy, examine the relationship between model confidence and human disagreement, and explore how disagreement samples influence model decision-making during few-shot learning and instruction fine-tuning. Our findings reveal that LLMs struggle with low-agreement samples, often exhibiting overconfidence in these ambiguous cases. However, incorporating disagreement samples into training improves both detection accuracy and model alignment with human judgment. These insights provide a foundation for enhancing LLM-based offensive language detection in real-world moderation tasks.
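To make the evaluation setup concrete, the following is a minimal sketch (not the paper's released code) of how one might relate model confidence to human annotation agreement: samples are binned by the fraction of annotators agreeing with the majority label, and binary accuracy plus mean model confidence are reported per bin. The data layout (a list of records with 'votes' and 'p_offensive') and the bin boundaries are illustrative assumptions.

    # Sketch: compare LLM confidence with human annotation agreement (assumed data layout).
    from collections import defaultdict

    def agreement_level(votes):
        """Fraction of annotators agreeing with the majority label (0.5-1.0 for binary)."""
        offensive = sum(votes)
        majority = max(offensive, len(votes) - offensive)
        return majority / len(votes)

    def evaluate(records, bins=((0.5, 0.7), (0.7, 0.9), (0.9, 1.01))):
        """records: list of dicts with keys
           'votes'       -> list of 0/1 human annotations,
           'p_offensive' -> model's probability that the text is offensive."""
        stats = defaultdict(lambda: {"n": 0, "correct": 0, "conf": 0.0})
        for r in records:
            agree = agreement_level(r["votes"])
            gold = int(sum(r["votes"]) * 2 > len(r["votes"]))     # majority label
            pred = int(r["p_offensive"] >= 0.5)
            conf = max(r["p_offensive"], 1.0 - r["p_offensive"])  # model confidence
            for lo, hi in bins:
                if lo <= agree < hi:
                    b = stats[(lo, hi)]
                    b["n"] += 1
                    b["correct"] += int(pred == gold)
                    b["conf"] += conf
        for (lo, hi), b in sorted(stats.items()):
            if b["n"]:
                print(f"agreement [{lo:.1f}, {hi:.1f}): "
                      f"acc={b['correct']/b['n']:.3f}, "
                      f"mean confidence={b['conf']/b['n']:.3f}, n={b['n']}")

    # Toy usage: one high-agreement sample and one low-agreement (disagreement) sample.
    evaluate([
        {"votes": [1, 1, 1, 1, 1], "p_offensive": 0.95},
        {"votes": [1, 0, 1, 0, 0], "p_offensive": 0.90},
    ])

A pattern like the paper's finding would show up here as high confidence in the low-agreement bin despite lower accuracy, i.e. overconfidence on ambiguous cases.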
@article{lu2025_2502.06207,
  title={Unveiling the Capabilities of Large Language Models in Detecting Offensive Language with Annotation Disagreement},
  author={Junyu Lu and Kai Ma and Kaichun Wang and Kelaiti Xiao and Roy Ka-Wei Lee and Bo Xu and Liang Yang and Hongfei Lin},
  journal={arXiv preprint arXiv:2502.06207},
  year={2025}
}