Textual Gradients are a Flawed Metaphor for Automatic Prompt Optimization
Daniel Melcer
Qi Chen
Wen-Hao Chiang
Shweta Garg
Pranav Garg
Christian Bock

Main: 7 pages · 5 figures · Bibliography: 3 pages · Appendix: 9 pages
Abstract
A well-engineered prompt can improve the performance of large language models; automatic prompt optimization techniques aim to achieve this improvement without requiring human effort to tune the prompts. One leading class of prompt optimization techniques introduces the analogy of textual gradients. We investigate the behavior of these textual gradient methods through a series of experiments and case studies. While such methods often result in a performance improvement, our experiments suggest that the gradient analogy does not accurately explain their behavior. Our insights may inform the selection of prompt optimization strategies and the development of new approaches.
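
To make the textual-gradient analogy concrete, here is a minimal sketch of the critique-then-revise loop that such methods follow. It assumes a generic `llm(prompt) -> str` completion function supplied by the caller; the function name, prompt templates, and failure format below are illustrative assumptions, not the paper's or any specific method's implementation.

```python
from typing import Callable, List, Tuple

def textual_gradient_step(
    llm: Callable[[str], str],
    prompt: str,
    failures: List[Tuple[str, str, str]],  # (input, model_output, expected)
) -> str:
    """One optimization step: critique the current prompt, then revise it."""
    # "Backward pass": ask the LLM for textual feedback (the "gradient")
    # explaining how the current prompt caused the observed failures.
    failure_report = "\n\n".join(
        f"Input: {x}\nGot: {y}\nExpected: {t}" for x, y, t in failures
    )
    gradient = llm(
        "The prompt below produced the failures that follow it. "
        "Explain what about the prompt caused these failures.\n\n"
        f"Prompt: {prompt}\n\nFailures:\n{failure_report}"
    )
    # "Update step": apply the feedback to produce a revised prompt,
    # loosely analogous to taking a step against the gradient.
    revised = llm(
        "Rewrite the prompt to address this feedback. "
        "Return only the new prompt.\n\n"
        f"Prompt: {prompt}\n\nFeedback: {gradient}"
    )
    return revised
```

In practice this step is iterated: each revised prompt is re-evaluated on a held-out set, new failures are collected, and the loop repeats. The paper's experiments probe whether the "feedback" produced in the first call actually behaves like a gradient, i.e., whether the revision's improvement is attributable to that feedback.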
