Toward Structured Knowledge Reasoning: Contrastive Retrieval-Augmented Generation on Experience

Large language models (LLMs) achieve strong performance on plain text tasks but underperform on structured data such as tables and databases. These challenges likely stem from limited exposure to structured data during pre-training and from rigid text-to-structure transfer mechanisms. Unlike humans, who seamlessly apply learned patterns across data modalities, LLMs struggle to infer implicit relationships embedded in tabular formats, especially in the absence of explicit structural guidance. To bridge this cognitive gap, we introduce Contrastive Retrieval-Augmented Generation on Experience (CoRE), a framework that builds experience memory representations and enhances generalization through contrastive In-Context Learning (ICL) to simulate human-like knowledge transfer. Experiments on Text-to-SQL and TableQA show that CoRE significantly improves performance, achieving average gains of 3.44% and 4.24% respectively, with up to 17.2% on challenging tasks. Our Monte Carlo Tree Search (MCTS)-generated Experience Memory expands training data 8-9x, enhancing diversity and domain coverage. This training-free and continual method propels LLMs toward structured knowledge expertise.
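
The abstract does not spell out implementation details, but a minimal sketch of the contrastive retrieve-then-prompt idea might look like the following. All names here (ExperienceMemory, retrieve_contrastive, build_contrastive_prompt) and the cosine-similarity retrieval are illustrative assumptions, not the paper's actual method.

```python
# Hypothetical sketch: retrieve similar and dissimilar past experiences
# from a memory and assemble a contrastive in-context prompt.
from dataclasses import dataclass
from typing import Callable, List, Tuple
import numpy as np


@dataclass
class Experience:
    question: str          # natural-language question
    answer: str            # e.g., a SQL query or table answer
    embedding: np.ndarray  # embedding of the question


class ExperienceMemory:
    """Stores past (question, answer) experiences with their embeddings."""

    def __init__(self, embed: Callable[[str], np.ndarray]):
        self.embed = embed
        self.items: List[Experience] = []

    def add(self, question: str, answer: str) -> None:
        self.items.append(Experience(question, answer, self.embed(question)))

    def retrieve_contrastive(
        self, query: str, k: int = 2
    ) -> Tuple[List[Experience], List[Experience]]:
        """Return the k most similar (positive) and k least similar
        (negative) experiences by cosine similarity to the query."""
        q = self.embed(query)
        sims = [
            float(np.dot(q, e.embedding)
                  / (np.linalg.norm(q) * np.linalg.norm(e.embedding) + 1e-9))
            for e in self.items
        ]
        order = np.argsort(sims)                      # ascending similarity
        negatives = [self.items[i] for i in order[:k]]
        positives = [self.items[i] for i in order[-k:][::-1]]
        return positives, negatives


def build_contrastive_prompt(query: str,
                             positives: List[Experience],
                             negatives: List[Experience]) -> str:
    """Assemble an ICL prompt that contrasts relevant experiences with
    structurally different ones before posing the new question."""
    lines = ["Relevant past examples:"]
    for e in positives:
        lines += [f"Q: {e.question}", f"A: {e.answer}"]
    lines.append("Examples with different structure or intent (do not imitate):")
    for e in negatives:
        lines += [f"Q: {e.question}", f"A: {e.answer}"]
    lines += ["Now answer the new question.", f"Q: {query}", "A:"]
    return "\n".join(lines)
```

Under this reading, MCTS would be used offline to populate the memory with diverse solved examples; retrieval and prompting then remain training-free, and new experiences can be appended continually without updating model weights.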
@article{gu2025_2506.00842,
  title={Toward Structured Knowledge Reasoning: Contrastive Retrieval-Augmented Generation on Experience},
  author={Jiawei Gu and Ziting Xian and Yuanzhen Xie and Ye Liu and Enjie Liu and Ruichao Zhong and Mochi Gao and Yunzhi Tan and Bo Hu and Zang Li},
  journal={arXiv preprint arXiv:2506.00842},
  year={2025}
}