Pretraining Language Models to Ponder in Continuous Space

27 May 2025
Boyi Zeng
Shixiang Song
Siyuan Huang
Yixuan Wang
He Li
Ziwei He
Xinbing Wang
Zhiyu Li
Zhouhan Lin
Abstract

Humans ponder before articulating complex sentence elements, enabling deeper cognitive processing through focused effort. In this work, we introduce this pondering process into language models by repeatedly invoking the forward process within a single token generation step. During pondering, instead of generating an actual token sampled from the prediction distribution, the model ponders by yielding a weighted sum of all token embeddings according to the predicted token distribution. The generated embedding is then fed back as input for another forward pass. We show that the model can learn to ponder in this way through self-supervised learning, without any human annotations. Our method is straightforward and can be seamlessly integrated with various existing language models. Experiments across three widely used open-source architectures (GPT-2, Pythia, and LLaMA) and extensive downstream task evaluations demonstrate the effectiveness and generality of our method. For language modeling tasks, pondering language models achieve performance comparable to vanilla models with twice the number of parameters. On 9 downstream benchmarks, our pondering-enhanced Pythia models significantly outperform the official Pythia models. Notably, pondering-enhanced Pythia-1B is comparable to TinyLlama-1.1B, which is trained on 10 times more data. The code is available at this https URL.
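The pondering step described above can be illustrated with a short PyTorch sketch. This is a minimal illustration, assuming a Hugging Face-style decoder-only model with accessible input embeddings; the number of pondering steps and the choice to append the continuous embedding as an extra input position are illustrative assumptions, not details confirmed by the abstract.

```python
# Minimal sketch of the pondering loop, assuming a decoder-only model that
# accepts `inputs_embeds` (Hugging Face convention). Not the authors' code.
import torch
import torch.nn.functional as F


@torch.no_grad()
def ponder_next_token_logits(model, input_embeds, num_ponder_steps=2):
    """Refine the next-token prediction by feeding continuous 'pondering'
    embeddings back into the model instead of sampling discrete tokens."""
    emb_matrix = model.get_input_embeddings().weight        # (vocab, hidden)
    for _ in range(num_ponder_steps):
        logits = model(inputs_embeds=input_embeds).logits[:, -1, :]
        probs = F.softmax(logits, dim=-1)                    # (batch, vocab)
        # Pondering embedding: probability-weighted sum of all token embeddings.
        ponder_emb = probs @ emb_matrix                       # (batch, hidden)
        # Feed the continuous embedding back for another forward pass
        # (appended here as one extra position; an assumption for illustration).
        input_embeds = torch.cat([input_embeds, ponder_emb.unsqueeze(1)], dim=1)
    # After pondering, predict the actual next token as usual.
    return model(inputs_embeds=input_embeds).logits[:, -1, :]
```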

@article{zeng2025_2505.20674,
  title={Pretraining Language Models to Ponder in Continuous Space},
  author={Boyi Zeng and Shixiang Song and Siyuan Huang and Yixuan Wang and He Li and Ziwei He and Xinbing Wang and Zhiyu Li and Zhouhan Lin},
  journal={arXiv preprint arXiv:2505.20674},
  year={2025}
}