Beyond 'Aha!': Toward Systematic Meta-Abilities Alignment in Large Reasoning Models

15 May 2025
Zhiyuan Hu
Yibo Wang
Hanze Dong
Yuhui Xu
Amrita Saha
Caiming Xiong
Bryan Hooi
Junnan Li
Abstract

Large reasoning models (LRMs) already possess a latent capacity for long chain-of-thought reasoning. Prior work has shown that outcome-based reinforcement learning (RL) can incidentally elicit advanced reasoning behaviors such as self-correction, backtracking, and verification, phenomena often referred to as the model's "aha moment". However, the timing and consistency of these emergent behaviors remain unpredictable and uncontrollable, limiting the scalability and reliability of LRMs' reasoning capabilities. To address these limitations, we move beyond reliance on prompts and coincidental "aha moments". Instead, we explicitly align models with three meta-abilities: deduction, induction, and abduction, using automatically generated, self-verifiable tasks. Our three-stage pipeline (individual alignment, parameter-space merging, and domain-specific reinforcement learning) boosts performance by over 10% relative to instruction-tuned baselines. Furthermore, domain-specific RL from the aligned checkpoint yields an additional 2% average gain in the performance ceiling across math, coding, and science benchmarks, demonstrating that explicit meta-ability alignment offers a scalable and dependable foundation for reasoning. Code is available at: this https URL
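The second stage of the pipeline merges the three individually aligned models in parameter space. Below is a minimal sketch of one common way to do this, plain weighted averaging of checkpoint weights in PyTorch; the file names, equal mixing coefficients, and function name are illustrative assumptions, not details taken from the paper or its released code.

# Sketch: merge three meta-ability-aligned checkpoints by weighted parameter averaging.
import torch

def merge_checkpoints(state_dicts, weights):
    """Return the weighted average of state dicts that share the same architecture."""
    assert len(state_dicts) == len(weights)
    total = sum(weights)
    weights = [w / total for w in weights]  # normalize mixing coefficients
    merged = {}
    for name in state_dicts[0]:
        # Element-wise weighted sum of the corresponding parameter tensors.
        merged[name] = sum(w * sd[name].float() for w, sd in zip(weights, state_dicts))
    return merged

# Hypothetical checkpoint files, one per meta-ability (deduction, induction, abduction).
deduction = torch.load("deduction_aligned.pt", map_location="cpu")
induction = torch.load("induction_aligned.pt", map_location="cpu")
abduction = torch.load("abduction_aligned.pt", map_location="cpu")

merged = merge_checkpoints([deduction, induction, abduction], weights=[1.0, 1.0, 1.0])
torch.save(merged, "meta_ability_merged.pt")

The merged checkpoint would then serve as the starting point for the third stage, domain-specific RL; in practice the mixing coefficients are hyperparameters rather than necessarily equal.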

@article{hu2025_2505.10554,
  title={ Beyond 'Aha!': Toward Systematic Meta-Abilities Alignment in Large Reasoning Models },
  author={ Zhiyuan Hu and Yibo Wang and Hanze Dong and Yuhui Xu and Amrita Saha and Caiming Xiong and Bryan Hooi and Junnan Li },
  journal={arXiv preprint arXiv:2505.10554},
  year={ 2025 }
}