
Scaling Context, Not Parameters: Training a Compact 7B Language Model for Efficient Long-Context Processing

Abstract

We present MegaBeam-Mistral-7B, a language model that supports a 512K-token context length. Our work addresses practical limitations in long-context training, supporting real-world tasks such as compliance monitoring and verification. Evaluated on three long-context benchmarks, our 7B-parameter model demonstrates superior in-context learning performance on HELMET and robust retrieval and tracing capability on RULER. It is currently the only open model to achieve competitive long-range reasoning on BABILong at 512K context length without RAG or targeted fine-tuning. Released as fully open source under the Apache 2.0 license, the model has been downloaded over 100,000 times on Hugging Face. Model available at: this https URL
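
Since the released checkpoint is a standard Hugging Face model, a minimal usage sketch with the transformers library is shown below. The repository name, the input file, and the question are assumptions for illustration and are not stated in the abstract; consult the linked model page for the exact identifier and recommended inference settings.

# Minimal sketch: loading the released 512K-context checkpoint with transformers.
# The model ID is an assumed repository name; see the model page linked above.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "aws-prototyping/MegaBeam-Mistral-7B-512k"  # assumed repository name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep the checkpoint's native precision
    device_map="auto",    # shard layers across available GPUs (requires accelerate)
)

# Long-context usage: place an entire document and a question in a single prompt.
with open("contract.txt") as f:   # hypothetical long input document
    long_document = f.read()

prompt = long_document + "\n\nQuestion: Which clauses cover data retention?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))

In practice, prompts approaching the 512K-token limit require substantial GPU memory for the KV cache, so a long-context-aware serving stack is typically used for full-length inputs.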

@article{wu2025_2505.08651,
  title={Scaling Context, Not Parameters: Training a Compact 7B Language Model for Efficient Long-Context Processing},
  author={Chen Wu and Yin Song},
  journal={arXiv preprint arXiv:2505.08651},
  year={2025}
}