
Divide-Fuse-Conquer: Eliciting "Aha Moments" in Multi-Scenario Games

Abstract

Large language models (LLMs) have been observed to suddenly exhibit advanced reasoning abilities during reinforcement learning (RL), resembling an "aha moment" triggered by simple outcome-based rewards. While RL has proven effective in eliciting such breakthroughs in tasks involving mathematics, coding, and vision, it faces significant challenges in multi-scenario games. The diversity of game rules, interaction modes, and environmental complexities often leads to policies that perform well in one scenario but fail to generalize to others. Simply combining multiple scenarios during training introduces additional challenges, such as training instability and poor performance. To overcome these challenges, we propose Divide-Fuse-Conquer, a framework designed to enhance generalization in multi-scenario RL. The approach first groups games heuristically based on characteristics such as rules and difficulty. Specialized models are then trained to excel at the games within each group; this is what we refer to as the divide step. Next, we fuse the parameters of the group-specialized models into a new model and continue training it across multiple groups until the scenarios in all groups are conquered. Experiments across 18 TextArena games show that Qwen2.5-32B-Align trained with the Divide-Fuse-Conquer strategy reaches a performance level comparable to Claude3.5, achieving 7 wins and 4 draws. We hope our approach can inspire future research on using reinforcement learning to improve the generalization of LLMs.
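The fuse step described above combines parameters from the group-specialized models into a single model before continued training. Below is a minimal sketch of one common way such fusion can be done, via (optionally weighted) parameter averaging; the function name, weighting scheme, and use of PyTorch are illustrative assumptions, not the paper's exact procedure.

```python
import torch


def fuse_state_dicts(state_dicts, weights=None):
    """Fuse models with identical architectures by averaging parameters.

    state_dicts: list of model.state_dict() objects from the specialized
        per-group models (the output of the "divide" step).
    weights: optional per-model fusion weights; defaults to a uniform average.
    """
    if weights is None:
        weights = [1.0 / len(state_dicts)] * len(state_dicts)
    fused = {}
    for key in state_dicts[0]:
        # Weighted sum of the corresponding tensor from each specialized model.
        fused[key] = sum(w * sd[key].float() for w, sd in zip(weights, state_dicts))
    return fused


# Usage (hypothetical): load the fused parameters into a fresh model, then
# continue RL training on the combined game groups (the "conquer" step).
# model.load_state_dict(fuse_state_dicts([sd_group_a, sd_group_b]))
```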

@article{zhang2025_2505.16401,
  title={Divide-Fuse-Conquer: Eliciting "Aha Moments" in Multi-Scenario Games},
  author={Xiaoqing Zhang and Huabin Zheng and Ang Lv and Yuhan Liu and Zirui Song and Flood Sung and Xiuying Chen and Rui Yan},
  journal={arXiv preprint arXiv:2505.16401},
  year={2025}
}