ResearchTrend.AI

Reliable Decision Support with LLMs: A Framework for Evaluating Consistency in Binary Text Classification Applications

20 May 2025
F. Megahed
Ying-Ju Chen
L. Allison Jones-Farmer
Younghwa Lee
Jiawei Brooke Wang
Inez M. Zwetsloot
Abstract

This study introduces a framework for evaluating consistency in large language model (LLM) binary text classification, addressing the lack of established reliability assessment methods. Adapting psychometric principles, we determine sample size requirements, develop metrics for invalid responses, and evaluate intra- and inter-rater reliability. Our case study examines financial news sentiment classification across 14 LLMs (including claude-3-7-sonnet, gpt-4o, deepseek-r1, gemma3, llama3.2, phi4, and command-r-plus), with five replicates per model on 1,350 articles. Models demonstrated high intra-rater consistency, achieving perfect agreement on 90-98% of examples, with minimal differences between expensive and economical models from the same families. When validated against StockNewsAPI labels, models achieved strong performance (accuracy 0.76-0.88), with smaller models like gemma3:1B, llama3.2:3B, and claude-3-5-haiku outperforming larger counterparts. All models performed at chance when predicting actual market movements, indicating task constraints rather than model limitations. Our framework provides systematic guidance for LLM selection, sample size planning, and reliability assessment, enabling organizations to optimize resources for classification tasks.
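The intra-rater consistency reported above (perfect agreement on 90-98% of examples across five replicates) can be illustrated with a minimal sketch. The function and toy data below are hypothetical, assuming a simple percent-agreement metric: an item counts as consistent only when every replicate assigns it the same binary label.

```python
def perfect_agreement_rate(replicate_labels):
    """Fraction of items on which all replicates assign the same label.

    replicate_labels: list of per-replicate label lists, e.g. five
    replicated runs of one LLM over the same articles.
    """
    n_items = len(replicate_labels[0])
    agreeing = sum(
        1 for item_labels in zip(*replicate_labels)
        if len(set(item_labels)) == 1  # all replicates match on this item
    )
    return agreeing / n_items

# Hypothetical toy data: 3 replicates over 4 articles
reps = [
    ["pos", "neg", "pos", "neg"],
    ["pos", "neg", "pos", "pos"],
    ["pos", "neg", "pos", "neg"],
]
print(perfect_agreement_rate(reps))  # 0.75 (replicates disagree on one article)
```

A stricter analysis of inter-rater reliability across different models would typically use a chance-corrected statistic such as Cohen's or Fleiss' kappa rather than raw percent agreement.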

@article{megahed2025_2505.14918,
  title={Reliable Decision Support with LLMs: A Framework for Evaluating Consistency in Binary Text Classification Applications},
  author={Fadel M. Megahed and Ying-Ju Chen and L. Allison Jones-Farmer and Younghwa Lee and Jiawei Brooke Wang and Inez M. Zwetsloot},
  journal={arXiv preprint arXiv:2505.14918},
  year={2025}
}