When Does Divide and Conquer Work for Long Context LLM? A Noise Decomposition Framework

We investigate the challenge of applying Large Language Models (LLMs) to long texts. We propose a theoretical framework that decomposes the failure modes of long-context tasks into three categories: cross-chunk dependence (task noise), confusion that grows with context size (model noise), and imperfect integration of partial results (aggregator noise). Under this view, we analyze when it is effective to use multi-agent chunking, i.e., dividing a long sequence into smaller chunks and aggregating the processed results of each chunk. Our experiments on tasks such as retrieval, question answering, and summarization confirm both the theoretical analysis and the conditions that favor multi-agent chunking. By exploring superlinear growth of model noise with input length, we also explain why, for large inputs, a weaker model configured with chunk-based processing can surpass a more advanced model like GPT-4o applied in a single shot. Overall, we present a principled framework for understanding long-context failures, and our results highlight a direct pathway to handling long contexts in LLMs with carefully managed chunking and aggregator strategies.
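The sketch below illustrates the divide-and-conquer pattern the abstract describes: split a long input into chunks, run the task on each chunk independently, then aggregate the partial answers with a final call. It is a minimal illustration, not the paper's implementation; the `llm_call` wrapper, prompt wording, and chunk/overlap sizes are all hypothetical assumptions.

```python
# Minimal sketch of divide-and-conquer chunking for a long-context task.
# `llm_call(prompt) -> str` is a hypothetical wrapper around any LLM API.
from typing import Callable, List


def chunk_text(text: str, chunk_size: int = 4000, overlap: int = 200) -> List[str]:
    """Split a long input into overlapping character-based chunks."""
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks


def divide_and_conquer(text: str, task: str, llm_call: Callable[[str], str]) -> str:
    """Process each chunk independently, then aggregate the partial answers.

    In the paper's framing, the per-chunk calls incur model noise (which grows
    with chunk length) and the final call incurs aggregator noise; tasks with
    strong cross-chunk dependence additionally incur task noise.
    """
    partial_answers = [
        llm_call(f"{task}\n\nContext chunk:\n{chunk}")
        for chunk in chunk_text(text)
    ]
    aggregation_prompt = (
        f"{task}\n\nCombine the following partial answers into one final answer:\n"
        + "\n---\n".join(partial_answers)
    )
    return llm_call(aggregation_prompt)
```

For example, `divide_and_conquer(document, "Summarize the document.", llm_call=my_model)` would produce chunk-level summaries and then merge them; whether this beats a single-shot call depends, per the paper's analysis, on how the three noise sources trade off for the given task and input length.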
@article{xu2025_2506.16411,
  title={When Does Divide and Conquer Work for Long Context LLM? A Noise Decomposition Framework},
  author={Zhen Xu and Shang Zhu and Jue Wang and Junlin Wang and Ben Athiwaratkun and Chi Wang and James Zou and Ce Zhang},
  journal={arXiv preprint arXiv:2506.16411},
  year={2025}
}