
Incorporating Token Usage into Prompting Strategy Evaluation

8 pages main, 2 pages bibliography, 10 pages appendix; 4 figures, 13 tables
Abstract

In recent years, large language models have demonstrated remarkable performance across diverse tasks. However, their task effectiveness is heavily dependent on the prompting strategy used to elicit output, which can vary widely in both performance and token usage. While task performance is often used to determine prompting strategy success, we argue that efficiency (balancing performance and token usage) can be a more practical metric for real-world utility. To enable this, we propose Big-$O_{tok}$, a theoretical framework for describing the token usage growth of prompting strategies, and analyze Token Cost, an empirical measure of tokens per performance. We apply these to several common prompting strategies and find that increased token usage leads to drastically diminishing performance returns. Our results validate the Big-$O_{tok}$ analyses and reinforce the need for efficiency-aware evaluations.
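The abstract does not give the formal definition of Token Cost; as a rough illustration, the sketch below assumes Token Cost is simply total tokens consumed divided by a task-performance score (e.g., accuracy) and compares two prompting strategies. The strategy names, token counts, and accuracies are hypothetical assumptions, not values from the paper.

```python
# Illustrative sketch only: assumes Token Cost = tokens consumed / performance score.
# The strategy names, token counts, and accuracies below are hypothetical, not from the paper.

def token_cost(total_tokens: int, performance: float) -> float:
    """Tokens spent per unit of performance (lower is better)."""
    if performance <= 0:
        raise ValueError("performance must be positive")
    return total_tokens / performance

# Hypothetical evaluation results for two prompting strategies on the same benchmark.
strategies = {
    "direct_prompt":    {"tokens": 12_000, "accuracy": 0.78},
    "chain_of_thought": {"tokens": 95_000, "accuracy": 0.84},
}

for name, stats in strategies.items():
    cost = token_cost(stats["tokens"], stats["accuracy"])
    print(f"{name:>18}: accuracy={stats['accuracy']:.2f}, "
          f"token cost={cost:,.0f} tokens per unit accuracy")
```

Under these made-up numbers, the chain-of-thought strategy gains six points of accuracy at roughly seven times the Token Cost, the kind of diminishing-returns trade-off the abstract describes.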

@article{sypherd2025_2505.14880,
  title={Incorporating Token Usage into Prompting Strategy Evaluation},
  author={Chris Sypherd and Sergei Petrov and Sonny George and Vaishak Belle},
  journal={arXiv preprint arXiv:2505.14880},
  year={2025}
}