Reconstructing Context: Evaluating Advanced Chunking Strategies for Retrieval-Augmented Generation

Retrieval-augmented generation (RAG) has become a transformative approach for enhancing large language models (LLMs) by grounding their outputs in external knowledge sources. Yet a critical question persists: how can vast volumes of external knowledge be managed effectively within the input constraints of LLMs? Traditional methods address this by chunking external documents into smaller, fixed-size segments. While this approach alleviates input limitations, it often fragments context, resulting in incomplete retrieval and diminished coherence in generation. To overcome these shortcomings, two advanced techniques have been introduced, late chunking and contextual retrieval, both of which aim to preserve global context. Despite their potential, their comparative strengths and limitations remain unclear. This study presents a rigorous analysis of late chunking and contextual retrieval, evaluating their effectiveness and efficiency in optimizing RAG systems. Our results indicate that contextual retrieval preserves semantic coherence more effectively but requires greater computational resources. In contrast, late chunking offers higher efficiency but tends to sacrifice relevance and completeness.
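
For orientation, the sketch below (not taken from the paper) contrasts the three strategies the abstract refers to: fixed-size chunking, late chunking (embed the whole document once, then pool token vectors per chunk), and contextual retrieval (prepend an LLM-generated description of how the chunk fits into the document). The names embed the document with any long-context embedding model off-stage; `summarize_context` is a hypothetical stand-in for an LLM call.

# Minimal sketch of the three chunking strategies; embedding model and LLM are stand-ins.
from typing import Callable
import numpy as np

def fixed_size_chunks(text: str, size: int = 512) -> list[str]:
    """Traditional chunking: split the document into fixed-size character windows."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def late_chunking(
    token_embeddings: np.ndarray,        # (num_tokens, dim), from embedding the WHOLE document
    chunk_spans: list[tuple[int, int]],  # token-index range of each chunk
) -> np.ndarray:
    """Late chunking: mean-pool token vectors inside each span, so every chunk
    vector was computed with the full document in context."""
    return np.stack([token_embeddings[s:e].mean(axis=0) for s, e in chunk_spans])

def contextual_chunks(
    document: str,
    chunks: list[str],
    summarize_context: Callable[[str, str], str],  # hypothetical LLM call
) -> list[str]:
    """Contextual retrieval: prepend an LLM-written note situating the chunk
    within the document, then embed and index the enriched chunk."""
    return [f"{summarize_context(document, c)}\n{c}" for c in chunks]

if __name__ == "__main__":
    doc = "RAG systems ground LLM outputs in external documents. " * 40
    chunks = fixed_size_chunks(doc, size=200)

    # Late-chunking demo with random vectors standing in for real token embeddings.
    fake_token_embs = np.random.rand(1000, 8)
    spans = [(i, i + 100) for i in range(0, 1000, 100)]
    print(late_chunking(fake_token_embs, spans).shape)  # (10, 8)

    # Contextual-retrieval demo with a trivial stand-in for the LLM.
    enriched = contextual_chunks(doc, chunks, lambda d, c: "[context: RAG overview]")
    print(enriched[0][:60])

The sketch also hints at the trade-off reported in the abstract: contextual retrieval requires one LLM call per chunk at indexing time, whereas late chunking only requires a single pass of the document through the embedding model.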
@article{merola2025_2504.19754,
  title   = {Reconstructing Context: Evaluating Advanced Chunking Strategies for Retrieval-Augmented Generation},
  author  = {Carlo Merola and Jaspinder Singh},
  journal = {arXiv preprint arXiv:2504.19754},
  year    = {2025}
}