
Introducing MAPO: Momentum-Aided Gradient Descent Prompt Optimization

Main: 6 pages · 4 figures · 4 tables · Bibliography: 1 page · Appendix: 1 page
Abstract

Momentum-Aided Prompt Optimization (MAPO) enhances the efficiency and efficacy of prompt optimization for Large Language Models (LLMs). Building on ProTeGi, MAPO uses positive natural-language "gradients" and a momentum-based extension to refine prompts effectively. By tracking gradient history, MAPO avoids local minima and oscillations. It also employs beam search and an Upper Confidence Bound (UCB) algorithm for balanced candidate expansion and selection. Benchmark testing shows that MAPO converges faster, requires fewer API calls, and achieves higher F1 scores than ProTeGi, establishing it as a robust and scalable solution for automated prompt engineering in LLMs.
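
The abstract outlines the core loop: natural-language "gradients" (critiques) drive prompt edits, a gradient history supplies momentum, and beam search with UCB selection governs which candidates to expand and keep. The Python sketch below illustrates one such momentum-aided edit step and a UCB score under stated assumptions; the names (call_llm, mapo_step, ucb_score), the momentum window, and the exploration constant are illustrative, not the authors' implementation.

import math


def ucb_score(avg_reward: float, pulls: int, total_pulls: int, c: float = 1.0) -> float:
    """Upper Confidence Bound used to decide which candidate prompts to evaluate next."""
    if pulls == 0:
        return float("inf")  # unevaluated candidates are always tried first
    return avg_reward + c * math.sqrt(math.log(total_pulls) / pulls)


def mapo_step(prompt: str, gradient_history: list[str], call_llm, minibatch) -> str:
    """One momentum-aided update: critique the prompt, then rewrite it.

    gradient_history holds previous natural-language "gradients" (critiques).
    Feeding the recent history back into the rewrite request is the momentum
    component that discourages oscillating back to earlier failure modes.
    call_llm is an assumed helper that sends a prompt to an LLM and returns text.
    """
    # 1. Obtain a natural-language gradient: a one-sentence critique of the prompt.
    gradient = call_llm(
        f"Prompt: {prompt}\nExamples it handled poorly: {minibatch}\n"
        "In one sentence, explain how the prompt should change."
    )
    gradient_history.append(gradient)

    # 2. Rewrite the prompt using the newest gradient plus the accumulated history.
    history_text = "\n".join(gradient_history[-5:])  # assumed momentum window of 5
    edited = call_llm(
        "Rewrite the prompt so it follows all of the feedback below.\n"
        f"Prompt: {prompt}\nFeedback (most recent last):\n{history_text}"
    )
    return edited

In a full beam-search loop, each surviving prompt would be expanded with mapo_step into several candidates, and ucb_score (over minibatch F1 estimates) would decide which candidates receive further evaluation and which beams survive to the next round.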

@article{cui2025_2410.19499,
  title={Introducing MAPO: Momentum-Aided Gradient Descent Prompt Optimization},
  author={Anthony Cui and Pranav Nandyalam and Andrew Rufail and Ethan Cheung and Aiden Lei and Kevin Zhu and Sean O'Brien},
  journal={arXiv preprint arXiv:2410.19499},
  year={2025}
}