ResearchTrend.AI

Rethinking Hate Speech Detection on Social Media: Can LLMs Replace Traditional Models?

15 June 2025
Daman Deep Singh
Ramanuj Bhattacharjee
Abhijnan Chakraborty
arXiv (abs) · PDF · HTML
Main: 14 pages · 3 figures · 2 tables · Bibliography: 3 pages
Abstract

Hate speech detection across contemporary social media presents unique challenges due to linguistic diversity and the informal nature of online discourse. These challenges are further amplified in settings involving code-mixing, transliteration, and culturally nuanced expressions. While fine-tuned transformer models, such as BERT, have become standard for this task, we argue that recent large language models (LLMs) not only surpass them but also redefine the landscape of hate speech detection more broadly. To support this claim, we introduce IndoHateMix, a diverse, high-quality dataset capturing Hindi-English code-mixing and transliteration in the Indian context, providing a realistic benchmark for evaluating model robustness in complex multilingual scenarios where existing NLP methods often struggle. Our extensive experiments show that cutting-edge LLMs (such as LLaMA-3.1) consistently outperform task-specific BERT-based models, even when fine-tuned on significantly less data. With their superior generalization and adaptability, LLMs offer a transformative approach to mitigating online hate in diverse environments. This raises the question of whether future work should prioritize developing specialized models or focus on curating richer and more varied datasets to further enhance the effectiveness of LLMs.
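The abstract frames the task as binary classification of potentially code-mixed posts by an instruction-tuned LLM. As a minimal illustrative sketch (the prompt template, label names, and parsing logic here are assumptions, not the authors' exact setup), one might format each post into a classification prompt and map the model's free-text completion back to a label:

```python
# Hypothetical sketch of LLM-based hate-speech classification as described
# in the abstract. The prompt wording and the HATE / NOT_HATE label scheme
# are illustrative assumptions; the paper's actual prompts may differ.

def build_prompt(text: str) -> str:
    """Format a (possibly Hindi-English code-mixed) post into a prompt."""
    return (
        "You are a content moderator. Classify the following social media "
        "post as HATE or NOT_HATE. The post may mix Hindi and English "
        "(code-mixed or transliterated).\n\n"
        f"Post: {text}\n"
        "Label:"
    )


def parse_label(completion: str) -> str:
    """Map a raw model completion to one of the two labels."""
    head = completion.strip().upper()
    # Treat any completion beginning with "HATE" as the positive class;
    # everything else (including "NOT_HATE") falls through to the negative.
    return "HATE" if head.startswith("HATE") else "NOT_HATE"
```

The completion itself would come from whichever LLM is being evaluated (e.g. a fine-tuned LLaMA-3.1); only the surrounding plumbing is shown here.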

View on arXiv
@article{singh2025_2506.12744,
  title={Rethinking Hate Speech Detection on Social Media: Can LLMs Replace Traditional Models?},
  author={Daman Deep Singh and Ramanuj Bhattacharjee and Abhijnan Chakraborty},
  journal={arXiv preprint arXiv:2506.12744},
  year={2025}
}