An Evaluation of LLMs for Detecting Harmful Computing Terms

12 March 2025
Joshua Jacas, Hana Winchester, Alicia Boyd, Brittany Johnson
Abstract

Detecting harmful and non-inclusive terminology in technical contexts is critical for fostering inclusive environments in computing. This study explores the impact of model architecture on harmful language detection by evaluating a curated database of technical terms, each paired with specific use cases. We tested a range of encoder, decoder, and encoder-decoder language models, including BERT-base-uncased, RoBERTa-large-mnli, Gemini Flash 1.5 and 2.0, GPT-4, Claude AI Sonnet 3.5, T5-large, and BART-large-mnli. Each model was presented with a standardized prompt to identify harmful and non-inclusive language across 64 terms. Results reveal that decoder models, particularly Gemini Flash 2.0 and Claude AI, excel in nuanced contextual analysis, while encoder models like BERT exhibit strong pattern recognition but struggle with classification certainty. We discuss the implications of these findings for improving automated detection tools and highlight model-specific strengths and limitations in fostering inclusive communication in technical domains.

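As a rough illustration of the evaluation setup the abstract describes (a standardized prompt applied to each term/use-case pair and sent to several models), the sketch below uses a hypothetical query_model callable and placeholder terms. The actual prompt wording, the 64-term database, and the model interfaces used by the authors are not given in this listing, so everything named here is an assumption.

```python
# Minimal sketch of the evaluation loop described in the abstract.
# The term list, prompt wording, and query_model() are illustrative
# placeholders, not the authors' actual materials.

from typing import Callable, Dict, List, Tuple

# Hypothetical curated entries: (term, use case) pairs.
TERMS: List[Tuple[str, str]] = [
    ("master/slave", "Describing a primary/replica database configuration."),
    ("whitelist", "Naming a list of approved IP addresses in a firewall rule."),
]

PROMPT_TEMPLATE = (
    "You are reviewing technical documentation for inclusive language.\n"
    "Term: {term}\n"
    "Use case: {use_case}\n"
    "Is this term harmful or non-inclusive in this context? "
    "Answer 'harmful', 'not harmful', or 'uncertain', then briefly explain."
)

def evaluate_model(name: str, query_model: Callable[[str], str]) -> Dict[str, int]:
    """Send the standardized prompt for every term and tally the labels."""
    counts = {"harmful": 0, "not harmful": 0, "uncertain": 0}
    for term, use_case in TERMS:
        reply = query_model(PROMPT_TEMPLATE.format(term=term, use_case=use_case)).lower()
        if "not harmful" in reply:
            counts["not harmful"] += 1
        elif "harmful" in reply:
            counts["harmful"] += 1
        else:
            counts["uncertain"] += 1
    print(f"{name}: {counts}")
    return counts

if __name__ == "__main__":
    # Stand-in for a real API call (e.g. to GPT-4 or Gemini); always answers "uncertain".
    evaluate_model("stub-model", lambda prompt: "uncertain")
```

In practice, query_model would wrap the provider-specific API for each of the encoder, decoder, and encoder-decoder models compared in the paper, keeping the prompt identical so that only the model architecture varies.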
@article{jacas2025_2503.09341,
  title={An Evaluation of LLMs for Detecting Harmful Computing Terms},
  author={Joshua Jacas and Hana Winchester and Alicia Boyd and Brittany Johnson},
  journal={arXiv preprint arXiv:2503.09341},
  year={2025}
}