
Advancing Harmful Content Detection in Organizational Research: Integrating Large Language Models with Elo Rating System

Main: 11 pages
5 figures
4 tables
Abstract

Large language models (LLMs) offer promising opportunities for organizational research. However, their built-in moderation systems can create problems when researchers try to analyze harmful content, often refusing certain instructions or producing overly cautious responses that undermine the validity of results. This is particularly problematic when analyzing organizational conflicts such as microaggressions or hate speech. This paper introduces an Elo rating-based method that significantly improves LLM performance for harmful content analysis. Across two datasets, one focused on microaggression detection and the other on hate speech, we find that our method outperforms traditional LLM prompting techniques and conventional machine learning models on key measures such as accuracy, precision, and F1 scores. Advantages include better reliability when analyzing harmful content, fewer false positives, and greater scalability for large-scale datasets. This approach supports organizational applications, including detecting workplace harassment, assessing toxic communication, and fostering safer and more inclusive work environments.
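The abstract describes combining LLM judgments with the Elo rating system to score harmful content, but the paper's exact procedure is not reproduced here. The sketch below is a minimal, hypothetical illustration of how repeated pairwise comparisons from an LLM judge could drive standard Elo updates; the `llm_compare` callback, the K-factor of 32, and the 1000-point starting rating are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch: Elo-style harmfulness ranking driven by pairwise LLM judgments.
# Assumption: `llm_compare(text_a, text_b)` is a user-supplied (hypothetical) function
# returning 1.0 if text_a is judged more harmful, 0.0 if text_b is, and 0.5 for a tie.

import itertools
import random

K = 32  # step size for rating updates (a common Elo default)


def expected_score(r_a: float, r_b: float) -> float:
    """Expected win probability of A against B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))


def update(r_a: float, r_b: float, score_a: float) -> tuple[float, float]:
    """Update both ratings given A's observed score (1.0, 0.5, or 0.0)."""
    e_a = expected_score(r_a, r_b)
    return r_a + K * (score_a - e_a), r_b + K * ((1.0 - score_a) - (1.0 - e_a))


def rank_by_harm(texts: list[str], llm_compare, rounds: int = 3) -> dict[str, float]:
    """Rank texts by harmfulness via repeated pairwise LLM comparisons."""
    ratings = {t: 1000.0 for t in texts}  # all items start at the same rating
    pairs = list(itertools.combinations(texts, 2))
    for _ in range(rounds):
        random.shuffle(pairs)  # vary comparison order across rounds
        for a, b in pairs:
            score_a = llm_compare(a, b)  # LLM judge's pairwise verdict
            ratings[a], ratings[b] = update(ratings[a], ratings[b], score_a)
    return ratings
```

The resulting ratings give a continuous harmfulness ordering that can then be thresholded or binned for downstream classification; how the paper maps ratings to detection decisions is not specified in the abstract.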

@article{akben2025_2506.16575,
  title={Advancing Harmful Content Detection in Organizational Research: Integrating Large Language Models with Elo Rating System},
  author={Mustafa Akben and Aaron Satko},
  journal={arXiv preprint arXiv:2506.16575},
  year={2025}
}