The Role of Diversity in In-Context Learning for Large Language Models

In-context learning (ICL) is a crucial capability of current large language models (LLMs), where the selection of in-context examples plays a key role in performance. While most existing approaches focus on selecting the examples most similar to the query, the impact of diversity in example selection remains underexplored. We systematically investigate the role of diversity in in-context example selection through experiments across a range of tasks, from sentiment classification to more challenging math and code problems. Experiments on the Llama-3.1, Gemma-2, and Mistral-v0.3 model families show that diversity-aware selection methods improve performance, particularly on complex tasks like math and code, and enhance robustness to out-of-distribution queries. To support these findings, we introduce a theoretical framework that explains the benefits of incorporating diversity in in-context example selection.
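The abstract does not specify the selection algorithm, so the following is only a minimal sketch of one common diversity-aware strategy: greedy MMR-style selection over example embeddings, trading off similarity to the query against redundancy among the chosen examples. The function name `select_diverse_examples` and the trade-off parameter `lam` are illustrative assumptions, not the paper's method.

```python
import numpy as np

def select_diverse_examples(query_emb, example_embs, k=8, lam=0.5):
    """Greedy MMR-style selection of in-context examples.

    query_emb:    (d,) embedding of the test query
    example_embs: (n, d) embeddings of candidate in-context examples
    k:            number of examples to select
    lam:          1.0 = pure similarity to the query, 0.0 = pure diversity
    """
    # Cosine similarities to the query and between candidates.
    ex = example_embs / np.linalg.norm(example_embs, axis=1, keepdims=True)
    q = query_emb / np.linalg.norm(query_emb)
    sim_to_query = ex @ q            # (n,)
    sim_between = ex @ ex.T          # (n, n)

    # Start with the single most similar example, then greedily add
    # examples that are relevant but not redundant with the chosen set.
    selected = [int(np.argmax(sim_to_query))]
    while len(selected) < min(k, len(example_embs)):
        remaining = [i for i in range(len(example_embs)) if i not in selected]
        scores = [
            lam * sim_to_query[i] - (1 - lam) * sim_between[i, selected].max()
            for i in remaining
        ]
        selected.append(remaining[int(np.argmax(scores))])
    return selected
```

With `lam=1.0` this reduces to pure similarity-based retrieval; lowering `lam` increasingly penalizes examples that duplicate what is already in the prompt, which is the kind of diversity pressure the abstract argues helps on math and code tasks.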
@article{xiao2025_2505.19426,
  title   = {The Role of Diversity in In-Context Learning for Large Language Models},
  author  = {Wenyang Xiao and Haoyu Zhao and Lingxiao Huang},
  journal = {arXiv preprint arXiv:2505.19426},
  year    = {2025}
}