Enhancing Granular Sentiment Classification with Chain-of-Thought Prompting in Large Language Models

Abstract
We explore the use of Chain-of-Thought (CoT) prompting with large language models (LLMs) to improve the accuracy of granular sentiment categorization in app store reviews. Traditional numeric and polarity-based ratings often fail to capture the nuanced sentiment embedded in user feedback. We evaluated the effectiveness of CoT prompting versus simple prompting on 2,000 Amazon app reviews by comparing each method's predictions with human judgments. CoT prompting improved classification accuracy from 84% to 93%, highlighting the benefit of explicit reasoning in enhancing sentiment analysis performance.
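The abstract contrasts simple prompting with Chain-of-Thought prompting and scores each against human labels. The sketch below illustrates the general setup; the paper's exact prompt wording, model, and label set are not given here, so the templates, the `classify` stub, and the toy labels are all illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of simple vs. Chain-of-Thought prompting for
# granular sentiment classification. Prompt wording and labels
# are hypothetical; the paper's actual prompts are not reproduced here.

SIMPLE_PROMPT = (
    "Classify the sentiment of this app review as positive, "
    "negative, or neutral.\nReview: {review}\nLabel:"
)

COT_PROMPT = (
    "Classify the sentiment of this app review as positive, "
    "negative, or neutral. Think step by step: first list the "
    "aspects the user mentions, then judge the feeling toward "
    "each, then give a final label.\nReview: {review}\nReasoning:"
)

def build_prompt(template: str, review: str) -> str:
    """Fill a prompt template with the review text."""
    return template.format(review=review)

def accuracy(predictions: list[str], gold: list[str]) -> float:
    """Fraction of predictions that match the human judgments."""
    correct = sum(p == g for p, g in zip(predictions, gold))
    return correct / len(gold)

# Toy illustration with made-up labels (not the paper's data):
gold = ["positive", "negative", "neutral", "negative"]
simple_preds = ["positive", "neutral", "neutral", "negative"]
cot_preds = ["positive", "negative", "neutral", "negative"]

print(f"simple prompting accuracy: {accuracy(simple_preds, gold):.2f}")
print(f"CoT prompting accuracy:    {accuracy(cot_preds, gold):.2f}")
```

In an actual evaluation, `simple_preds` and `cot_preds` would come from sending each filled prompt to the LLM and parsing its final label; the comparison step itself reduces to the exact-match accuracy shown above.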
@article{miriyala2025_2505.04135,
  title={Enhancing Granular Sentiment Classification with Chain-of-Thought Prompting in Large Language Models},
  author={Vihaan Miriyala and Smrithi Bukkapatnam and Lavanya Prahallad},
  journal={arXiv preprint arXiv:2505.04135},
  year={2025}
}