ResearchTrend.AI

Beyond Keywords: Evaluating Large Language Model Classification of Nuanced Ableism

26 May 2025
Naba Rizvi
Harper Strickland
Saleha Ahmedi
Aekta Kallepalli
Isha Khirwadkar
William Wu
Imani Munyaka
Nedjma Ousidhoum
Main: 8 pages, 4 figures, 2 tables; Bibliography: 3 pages; Appendix: 2 pages
Abstract

Large language models (LLMs) are increasingly used in decision-making tasks like résumé screening and content moderation, giving them the power to amplify or suppress certain perspectives. While previous research has identified disability-related biases in LLMs, little is known about how they conceptualize ableism or detect it in text. We evaluate the ability of four LLMs to identify nuanced ableism directed at autistic individuals. We examine the gap between their understanding of relevant terminology and their effectiveness in recognizing ableist content in context. Our results reveal that LLMs can identify autism-related language but often miss harmful or offensive connotations. Further, we conduct a qualitative comparison of human and LLM explanations. We find that LLMs tend to rely on surface-level keyword matching, leading to context misinterpretations, in contrast to human annotators who consider context, speaker identity, and potential impact. On the other hand, both LLMs and humans agree on the annotation scheme, suggesting that a binary classification is adequate for evaluating LLM performance, which is consistent with findings from prior studies involving human annotators.
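The abstract describes a binary annotation scheme in which each text is labeled ableist or not, by both LLMs and human annotators, and the two sets of labels are compared. A minimal sketch of that agreement computation (not the authors' code; the labels below are hypothetical placeholders):

```python
# Hedged sketch of the binary evaluation setup the abstract describes:
# each item gets a binary label (1 = ableist, 0 = not ableist) from an
# LLM and from human annotators, and we measure how often they agree.

def agreement(llm_labels, human_labels):
    """Fraction of items where the LLM label matches the human label."""
    assert len(llm_labels) == len(human_labels)
    matches = sum(l == h for l, h in zip(llm_labels, human_labels))
    return matches / len(llm_labels)

# Hypothetical annotations for six texts.
human = [1, 1, 0, 0, 1, 0]
llm   = [1, 0, 0, 0, 1, 1]  # misses one nuanced case, over-flags a keyword

print(f"agreement = {agreement(llm, human):.2f}")  # → 0.67
```

In practice such a comparison would be extended with per-class metrics (precision/recall on the ableist class), since surface-level keyword matching, as the abstract notes, tends to inflate false positives and miss context-dependent harm.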

View on arXiv
@article{rizvi2025_2505.20500,
  title={Beyond Keywords: Evaluating Large Language Model Classification of Nuanced Ableism},
  author={Naba Rizvi and Harper Strickland and Saleha Ahmedi and Aekta Kallepalli and Isha Khirwadkar and William Wu and Imani N. S. Munyaka and Nedjma Ousidhoum},
  journal={arXiv preprint arXiv:2505.20500},
  year={2025}
}