Learning to Select In-Context Demonstration Preferred by Large Language Model

26 May 2025
Zheng Zhang, Shaocheng Lan, Lei Song, Jiang Bian, Yexin Li, Kan Ren
Main: 8 pages · 4 figures · 10 tables · Bibliography: 3 pages · Appendix: 5 pages
Abstract

In-context learning (ICL) enables large language models (LLMs) to adapt to new tasks at inference time using only a few demonstrations. However, ICL performance depends heavily on which demonstrations are selected. Recent work explores retrieval-based methods for selecting query-specific demonstrations, but these approaches often rely on surrogate objectives such as metric learning and thus fail to directly optimize ICL performance; consequently, they struggle to identify truly beneficial demonstrations. Moreover, their discriminative retrieval paradigm is ineffective when the candidate pool lacks sufficient high-quality demonstrations. To address these challenges, we propose GenICL, a novel generative preference learning framework that leverages LLM feedback to directly optimize demonstration selection for ICL. Experiments on 19 datasets across 11 task categories show that GenICL outperforms existing methods at selecting the most effective demonstrations, leading to better ICL performance.
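
The abstract only sketches the mechanism, so the following is a minimal, hypothetical Python illustration of the underlying idea (not the authors' implementation): score each candidate demonstration by LLM feedback, here taken to be the log-likelihood the LLM assigns to the gold answer when conditioned on that demonstration, and turn the scores into preference pairs that a selector could be trained on. The model name, prompt format, and pairing rule are all illustrative assumptions.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: any small causal LM works for this sketch
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

@torch.no_grad()
def llm_feedback(demo: str, query: str, answer: str) -> float:
    """Average log-prob of `answer` given a one-shot prompt containing `demo`."""
    prompt = f"{demo}\n{query}"
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    full_ids = tokenizer(prompt + " " + answer, return_tensors="pt").input_ids
    labels = full_ids.clone()
    labels[:, : prompt_ids.shape[1]] = -100  # mask prompt tokens; score only the answer
    out = model(full_ids, labels=labels)
    return -out.loss.item()  # higher = the LLM "prefers" this demonstration

def preference_pairs(candidates, query, answer):
    """Rank candidates by LLM feedback; pair best vs. worst as (chosen, rejected)."""
    scored = sorted(candidates, key=lambda d: llm_feedback(d, query, answer), reverse=True)
    return [(scored[0], scored[-1])]  # assumption: simplest possible pairing rule

Such (chosen, rejected) pairs are the standard input to preference-learning objectives; how GenICL actually constructs and optimizes over them is detailed in the paper itself.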

View on arXiv: https://arxiv.org/abs/2505.19966
@article{zhang2025_2505.19966,
  title={Learning to Select In-Context Demonstration Preferred by Large Language Model},
  author={Zheng Zhang and Shaocheng Lan and Lei Song and Jiang Bian and Yexin Li and Kan Ren},
  journal={arXiv preprint arXiv:2505.19966},
  year={2025}
}