
Hunyuan-TurboS: Advancing Large Language Models through Mamba-Transformer Synergy and Adaptive Chain-of-Thought

Abstract

As Large Language Models (LLMs) rapidly advance, we introduce Hunyuan-TurboS, a novel large hybrid Transformer-Mamba Mixture of Experts (MoE) model. It synergistically combines Mamba's long-sequence processing efficiency with the Transformer's superior contextual understanding. Hunyuan-TurboS features an adaptive long-short chain-of-thought (CoT) mechanism that dynamically switches between rapid responses for simple queries and deep "thinking" modes for complex problems, optimizing computational resources. Architecturally, this 56B-activated (560B total) parameter model employs 128 layers (Mamba2, Attention, FFN) with an innovative AMF/MF block pattern. The faster Mamba2 ensures linear complexity, Grouped-Query Attention minimizes the KV cache, and the FFNs use an MoE structure. Pre-trained on 16T high-quality tokens, it supports a 256K context length and is the first industry-deployed large-scale Mamba model. Our comprehensive post-training strategy enhances capabilities via Supervised Fine-Tuning (3M instructions), a novel Adaptive Long-short CoT Fusion method, Multi-round Deliberation Learning for iterative improvement, and a two-stage Large-scale Reinforcement Learning process targeting STEM and general instruction-following. Evaluations show strong performance: an overall Top-7 rank on LMSYS Chatbot Arena with a score of 1356, outperforming leading models such as Gemini-2.0-Flash-001 (1352) and o4-mini-2025-04-16 (1345). TurboS also achieves an average of 77.9% across 23 automated benchmarks. Hunyuan-TurboS balances high performance and efficiency, offering substantial capabilities at lower inference costs than many reasoning models and establishing a new paradigm for efficient large-scale pre-trained models.
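
To make the hybrid layout concrete, the sketch below (plain Python, illustrative only) lays out a 128-sub-block stack by cycling through an assumed "AMF, MF" grouping of Grouped-Query Attention (A), Mamba2 (M), and MoE FFN (F) sub-blocks. The exact interleaving ratio, module names, and layer internals are not specified in the abstract, so everything beyond the A/M/F vocabulary and the 128-layer count is a hypothetical illustration.

from dataclasses import dataclass
from itertools import cycle, islice

# Hypothetical structural sketch of the 128-layer hybrid stack: sub-blocks are
# drawn from Grouped-Query Attention ("A"), Mamba2 ("M"), and MoE FFN ("F").
# The repeating "AMF, MF" pattern below is an assumption, not the paper's
# published configuration.

@dataclass
class LayerSpec:
    index: int
    kind: str  # "gqa_attention", "mamba2", or "moe_ffn"

KIND = {"A": "gqa_attention", "M": "mamba2", "F": "moe_ffn"}

def build_stack(total_layers: int = 128, pattern: str = "AMFMF") -> list[LayerSpec]:
    """Lay out `total_layers` sub-blocks by cycling through `pattern`."""
    kinds = (KIND[c] for c in cycle(pattern))
    return [LayerSpec(i, k) for i, k in enumerate(islice(kinds, total_layers))]

if __name__ == "__main__":
    stack = build_stack()
    counts: dict[str, int] = {}
    for spec in stack:
        counts[spec.kind] = counts.get(spec.kind, 0) + 1
    # With the assumed pattern this yields 26 attention, 51 Mamba2, and
    # 51 MoE FFN sub-blocks, summing to 128.
    print(counts)

Changing `pattern` is enough to explore other interleavings (e.g. attention-sparser layouts), which is the design lever the AMF/MF block pattern exposes: Mamba2 sub-blocks keep sequence mixing linear in length, while the sparser attention sub-blocks bound KV-cache growth.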

@article{team2025_2505.15431,
  title={Hunyuan-TurboS: Advancing Large Language Models through Mamba-Transformer Synergy and Adaptive Chain-of-Thought},
  author={ Tencent Hunyuan Team and Ao Liu and Botong Zhou and Can Xu and Chayse Zhou and ChenChen Zhang and Chengcheng Xu and Chenhao Wang and Decheng Wu and Dengpeng Wu and Dian Jiao and Dong Du and Dong Wang and Feng Zhang and Fengzong Lian and Guanghui Xu and Guanwei Zhang and Hai Wang and Haipeng Luo and Han Hu and Huilin Xu and Jiajia Wu and Jianchen Zhu and Jianfeng Yan and Jiaqi Zhu and Jihong Zhang and Jinbao Xue and Jun Xia and Junqiang Zheng and Kai Liu and Kai Zhang and Kai Zheng and Kejiao Li and Keyao Wang and Lan Jiang and Lixin Liu and Lulu Wu and Mengyuan Huang and Peijie Yu and Peiqi Wang and Qian Wang and Qianbiao Xiang and Qibin Liu and Qingfeng Sun and Richard Guo and Ruobing Xie and Saiyong Yang and Shaohua Chen and Shihui Hu and Shuai Li and Shuaipeng Li and Shuang Chen and Suncong Zheng and Tao Yang and Tian Zhang and Tinghao Yu and Weidong Han and Weijie Liu and Weijin Zhou and Weikang Wang and Wesleye Chen and Xiao Feng and Xiaoqin Ren and Xingwu Sun and Xiong Kuang and Xuemeng Huang and Xun Cao and Yanfeng Chen and Yang Du and Yang Zhen and Yangyu Tao and Yaping Deng and Yi Shen and Yigeng Hong and Yiqi Chen and Yiqing Huang and Yuchi Deng and Yue Mao and Yulong Wang and Yuyuan Zeng and Zenan Xu and Zhanhui Kang and Zhe Zhao and ZhenXiang Yan and Zheng Fang and Zhichao Hu and Zhongzhi Chen and Zhuoyu Li and Zongwei Li and Alex Yan and Ande Liang and Baitong Liu and Beiping Pan and Bin Xing and Binghong Wu and Bingxin Qu and Bolin Ni and Boyu Wu and Chen Li and Cheng Jiang },
  journal={arXiv preprint arXiv:2505.15431},
  year={2025}
}