
Exploring Explanations Improves the Robustness of In-Context Learning

Main: 9 pages · Appendix: 9 pages · Bibliography: 4 pages · 13 figures · 6 tables
Abstract

In-context learning (ICL) has emerged as a successful paradigm for leveraging large language models (LLMs). However, it often struggles to generalize beyond the distribution of the provided demonstrations. A recent advancement in enhancing robustness is ICL with explanations (X-ICL), which improves prediction reliability by guiding LLMs to understand and articulate the reasoning behind correct labels. Building on this approach, we introduce an advanced framework that extends X-ICL by systematically exploring explanations for all possible labels (X²-ICL), thereby enabling more comprehensive and robust decision-making. Experimental results on multiple natural language understanding datasets validate the effectiveness of X²-ICL, demonstrating significantly improved robustness to out-of-distribution data compared to existing ICL approaches.
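
As a rough illustration (not taken from the paper), the sketch below shows one way an X²-ICL-style prompt could be assembled in Python: each demonstration enumerates a brief explanation for every candidate label before stating the gold label, and the query asks the model to reason over all labels before answering. The label set, field names, and prompt format are assumptions for illustration only, not the authors' implementation.

# Hypothetical sketch of an X^2-ICL-style prompt builder. Every detail of the
# format (label set, field names, instruction wording) is an assumption.

LABELS = ["entailment", "neutral", "contradiction"]  # example NLI label set

def format_demo(premise: str, hypothesis: str,
                explanations: dict[str, str], gold: str) -> str:
    """Render one demonstration with an explanation per candidate label."""
    lines = [f"Premise: {premise}", f"Hypothesis: {hypothesis}"]
    for label in LABELS:
        # Unlike standard X-ICL, include reasoning for every label, not only the gold one.
        lines.append(f"If '{label}': {explanations[label]}")
    lines.append(f"Answer: {gold}")
    return "\n".join(lines)

def build_prompt(demos: list[dict], query: dict) -> str:
    """Concatenate demonstrations and append the unlabeled query."""
    parts = [format_demo(d["premise"], d["hypothesis"],
                         d["explanations"], d["gold"]) for d in demos]
    parts.append(f"Premise: {query['premise']}\n"
                 f"Hypothesis: {query['hypothesis']}\n"
                 "Consider an explanation for each label, then answer.")
    return "\n\n".join(parts)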

@article{honda2025_2506.02378,
  title={Exploring Explanations Improves the Robustness of In-Context Learning},
  author={Ukyo Honda and Tatsushi Oka},
  journal={arXiv preprint arXiv:2506.02378},
  year={2025}
}