Tournament of Prompts: Evolving LLM Instructions Through Structured Debates and Elo Ratings

30 May 2025
Anirudh Nair, Adi Banerjee, Laurent Mombaerts, Matthew Hagen, Tarik Borogovac
Main: 7 pages · 4 figures · 4 tables · Bibliography: 3 pages · Appendix: 5 pages
Abstract

Prompt engineering represents a critical bottleneck to harnessing the full potential of Large Language Models (LLMs) for solving complex tasks, as it requires specialized expertise, significant trial-and-error, and manual intervention. This challenge is particularly pronounced for tasks involving subjective quality assessment, where defining explicit optimization objectives becomes fundamentally problematic. Existing automated prompt optimization methods falter in these scenarios, as they typically require well-defined, task-specific numerical fitness functions or rely on generic templates that cannot capture the nuanced requirements of complex use cases. We introduce DEEVO (DEbate-driven EVOlutionary prompt optimization), a novel framework that guides prompt evolution through debate-driven evaluation and Elo-based selection. In contrast to prior work, DEEVO's approach enables exploration of the discrete prompt space while preserving semantic coherence, through intelligent crossover and strategic mutation operations that incorporate debate-based feedback, combining elements from both successful and unsuccessful prompts based on identified strengths rather than arbitrary splicing. Using Elo ratings as a fitness proxy, DEEVO simultaneously drives improvement and preserves valuable diversity in the prompt population. Experimental results demonstrate that DEEVO significantly outperforms both manual prompt engineering and alternative state-of-the-art optimization approaches on open-ended and closed-ended tasks, despite using no ground-truth feedback. By connecting LLMs' reasoning capabilities with adaptive optimization, DEEVO represents a significant advancement in prompt optimization research, eliminating the need for predetermined metrics to continuously improve AI systems.
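
The abstract outlines an evolutionary loop: prompts compete in pairwise structured debates, an LLM judge picks the winner, Elo ratings serve as the fitness proxy, and high-rated prompts are bred via feedback-aware crossover and mutation. Below is a minimal Python sketch of that loop under stated assumptions: `judge`, `crossover`, and `mutate` are hypothetical stand-ins for DEEVO's debate-based components (the paper's actual operators, K-factor, and selection schedule are not specified here), and the standard Elo update is used.

```python
import random

K = 32  # standard Elo K-factor; an assumption, not the paper's value


def expected_score(r_a: float, r_b: float) -> float:
    """Standard Elo expected score for A against B."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))


def update_elo(r_a: float, r_b: float, a_won: bool) -> tuple[float, float]:
    """Update both ratings after one debate; winner scores 1, loser 0."""
    e_a = expected_score(r_a, r_b)
    s_a = 1.0 if a_won else 0.0
    return r_a + K * (s_a - e_a), r_b + K * ((1.0 - s_a) - (1.0 - e_a))


def evolve(prompts, n_generations, judge, crossover, mutate):
    """Skeleton of an Elo-ranked evolutionary loop over prompt strings.

    judge(p_a, p_b) -> True if p_a wins the structured debate (an LLM
    call in DEEVO, stubbed here); crossover/mutate stand in for the
    paper's debate-feedback-aware operators.
    """
    ratings = {p: 1000.0 for p in prompts}
    for _ in range(n_generations):
        # Pairwise debate updates the Elo fitness proxy.
        a, b = random.sample(list(ratings), 2)
        ratings[a], ratings[b] = update_elo(ratings[a], ratings[b], judge(a, b))

        # Breed from the two highest-rated prompts; replace the lowest-rated
        # one, keeping the population size fixed.
        ranked = sorted(ratings, key=ratings.get, reverse=True)
        child = mutate(crossover(ranked[0], ranked[1]))
        del ratings[ranked[-1]]
        ratings[child] = 1000.0
    return sorted(ratings, key=ratings.get, reverse=True)
```

Because Elo updates need only pairwise win/loss outcomes, the loop requires no task-specific numerical fitness function or ground truth, which is the property the abstract emphasizes.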

@article{nair2025_2506.00178,
  title={Tournament of Prompts: Evolving LLM Instructions Through Structured Debates and Elo Ratings},
  author={Anirudh Nair and Adi Banerjee and Laurent Mombaerts and Matthew Hagen and Tarik Borogovac},
  journal={arXiv preprint arXiv:2506.00178},
  year={2025}
}