S2LPP: Small-to-Large Prompt Prediction across LLMs

26 May 2025 · arXiv:2505.20097

Liang Cheng, Tianyi Li, Zhaowei Wang, Mark Steedman
Main: 1 page · Bibliography: 1 page · Appendix: 13 pages · 7 figures · 7 tables
Abstract

The performance of pre-trained Large Language Models (LLMs) is often sensitive to nuances in prompt templates, requiring careful prompt engineering that adds costs in computing and human effort. In this study, we present experiments encompassing multiple LLM variants of varying sizes, aimed at probing their preferences among different prompts. Through experiments on Question Answering, we show that prompt preference is consistent across LLMs of different sizes. We also show that this consistency extends to other tasks, such as Natural Language Inference. Exploiting this consistency, we propose a method that uses a smaller model to select effective prompt templates for a larger model. We show that our method substantially reduces the cost of prompt engineering while consistently matching the performance of the best prompt among the candidates. Moreover, our experiments demonstrate the efficacy of our strategy across fourteen LLMs and its applicability to a broad range of NLP tasks, highlighting its robustness.
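
The following is a minimal sketch of the small-to-large selection idea described in the abstract: score each candidate prompt template with a cheap, small model on a held-out set, then reuse the winning template with the larger model. All names (select_prompt_with_small_model, small_model_accuracy, dev_examples) are illustrative assumptions, not the authors' actual implementation.

from typing import Callable, Sequence

def select_prompt_with_small_model(
    candidate_templates: Sequence[str],
    dev_examples: Sequence[dict],
    small_model_accuracy: Callable[[str, Sequence[dict]], float],
) -> str:
    """Return the candidate template that scores highest under the small model.

    small_model_accuracy is any caller-supplied function that runs the small
    model with a given template over dev_examples and returns a score
    (e.g. QA exact-match accuracy). This sketch assumes at least one candidate.
    """
    best_template, best_score = candidate_templates[0], float("-inf")
    for template in candidate_templates:
        score = small_model_accuracy(template, dev_examples)
        if score > best_score:
            best_template, best_score = template, score
    return best_template

In use, only the small model is queried during prompt search; the selected template is then filled with the actual inputs and sent to the large model, which is where the cost saving over running prompt search directly on the large model would come from.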

@article{cheng2025_2505.20097,
  title={S2LPP: Small-to-Large Prompt Prediction across LLMs},
  author={Liang Cheng and Tianyi Li and Zhaowei Wang and Mark Steedman},
  journal={arXiv preprint arXiv:2505.20097},
  year={2025}
}