Exploring Explanations Improves the Robustness of In-Context Learning

In-context learning (ICL) has emerged as a successful paradigm for leveraging large language models (LLMs). However, it often struggles to generalize beyond the distribution of the provided demonstrations. A recent advancement in enhancing robustness is ICL with explanations (X-ICL), which improves prediction reliability by guiding LLMs to understand and articulate the reasoning behind correct labels. Building on this approach, we introduce an advanced framework that extends X-ICL by systematically exploring explanations for all possible labels (X²-ICL), thereby enabling more comprehensive and robust decision-making. Experimental results on multiple natural language understanding datasets validate the effectiveness of X²-ICL, demonstrating significantly improved robustness to out-of-distribution data compared to existing ICL approaches.
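To make the idea concrete, below is a minimal sketch (in Python) of how an X²-ICL-style prompt might be assembled for a natural language inference task. The prompt layout, label set, and function names here are illustrative assumptions; the abstract specifies only that explanations are explored for every candidate label, not the exact prompting scheme used in the paper.

```python
# Minimal sketch of an X^2-ICL-style prompt builder (illustrative only;
# the exact prompt format is not specified in the abstract).

from typing import List, Tuple

# Example label set for an NLI task (an assumption for this sketch).
LABELS = ["entailment", "neutral", "contradiction"]

def build_x2icl_prompt(
    demos: List[Tuple[str, str, str]],  # (input, gold label, explanation)
    test_input: str,
    labels: List[str] = LABELS,
) -> str:
    """Compose a prompt that asks the model to articulate an explanation
    for every candidate label before committing to a final prediction."""
    parts = []
    for text, label, explanation in demos:
        parts.append(f"Input: {text}")
        # In plain X-ICL, only the explanation for the correct label is shown.
        parts.append(f"Explanation: {explanation}")
        parts.append(f"Label: {label}\n")
    parts.append(f"Input: {test_input}")
    # X^2-ICL: elicit one explanation per candidate label, then decide.
    for label in labels:
        parts.append(f"Explanation if '{label}':")
    parts.append("Final label:")
    return "\n".join(parts)

if __name__ == "__main__":
    demos = [
        (
            "Premise: A man is playing guitar. Hypothesis: A person makes music.",
            "entailment",
            "Playing guitar is a way of making music, so the hypothesis follows.",
        )
    ]
    print(build_x2icl_prompt(
        demos,
        "Premise: A dog sleeps. Hypothesis: The dog is running.",
    ))
```

The resulting prompt would then be passed to an LLM, whose per-label explanations ground the final prediction in reasoning about every candidate rather than the demonstrated labels alone.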
@article{honda2025_2506.02378,
  title   = {Exploring Explanations Improves the Robustness of In-Context Learning},
  author  = {Ukyo Honda and Tatsushi Oka},
  journal = {arXiv preprint arXiv:2506.02378},
  year    = {2025}
}