BOW: Bottlenecked Next Word Exploration

16 June 2025
Ming Shen
Zhikun Xu
Xiao Ye
Jacob Dineen
Ben Zhou
Topics: OffRL, LRM
Main: 9 pages · 9 figures · 8 tables · Bibliography: 3 pages · Appendix: 3 pages
Abstract

Large language models (LLMs) are typically trained via next-word prediction (NWP), which yields strong surface-level fluency but offers little support for robust reasoning. We propose BOttlenecked next Word exploration (BOW), a novel RL framework that rethinks NWP by introducing a reasoning bottleneck: a policy model first generates a reasoning path rather than predicting the next token directly, after which a frozen judge model predicts the next-token distribution based solely on that reasoning path. We train the policy model using GRPO, with rewards that quantify how effectively the reasoning path facilitates next-word recovery. Compared with other continual-pretraining baselines, BOW improves both the general and the next-word reasoning capabilities of the base model across a range of benchmarks. Our findings show that BOW can serve as an effective and scalable alternative to vanilla NWP.
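The reward at the center of this setup is straightforward to sketch. Below is a minimal Python sketch (PyTorch + Hugging Face Transformers) of how such a reward could be computed: a frozen judge scores the gold next word conditioned only on the policy's reasoning path. The gpt2 checkpoint, the prompt template, and the function name bow_reward are illustrative assumptions, not details taken from the paper.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Stand-in judge model; the paper's actual judge and prompt format may differ.
tok = AutoTokenizer.from_pretrained("gpt2")
judge = AutoModelForCausalLM.from_pretrained("gpt2")
judge.eval()  # the judge stays frozen; only the policy is trained

@torch.no_grad()
def bow_reward(reasoning_path: str, gold_next_word: str) -> float:
    """Reward: the judge's log-probability of the gold next word,
    conditioned only on the reasoning path (the original context is
    hidden from the judge)."""
    prompt = f"{reasoning_path}\nTherefore, the next word is:"  # assumed template
    prompt_ids = tok(prompt, return_tensors="pt").input_ids
    gold_ids = tok(" " + gold_next_word, return_tensors="pt").input_ids
    ids = torch.cat([prompt_ids, gold_ids], dim=1)
    logits = judge(ids).logits  # shape: (1, seq_len, vocab)
    # Position i predicts token i+1, so take the logits that predict each
    # gold token and sum their log-probabilities.
    pred = torch.log_softmax(logits[0, prompt_ids.size(1) - 1 : -1], dim=-1)
    return pred.gather(1, gold_ids[0].unsqueeze(1)).sum().item()

Under GRPO, one would sample a group of reasoning paths per context, compute this reward for each, and normalize the rewards within the group into advantages for the policy update; the judge receives no gradient updates.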

@article{shen2025_2506.13502,
  title={BOW: Bottlenecked Next Word Exploration},
  author={Ming Shen and Zhikun Xu and Xiao Ye and Jacob Dineen and Ben Zhou},
  journal={arXiv preprint arXiv:2506.13502},
  year={2025}
}