
Step 3.5 Flash: Open Frontier-Level Intelligence with 11B Active Parameters

Ailin Huang
Ang Li
Aobo Kong
Bin Wang
Binxing Jiao
Bo Dong
Bojun Wang
Boyu Chen
Brian Li
Buyun Ma
Chang Su
Changxin Miao
Changyi Wan
Chao Lou
Chen Hu
Chen Xu
Chenfeng Yu
Chengting Feng
Chengyuan Yao
Chunrui Han
Dan Ma
Dapeng Shi
Daxin Jiang
Dehua Ma
Deshan Sun
Di Qi
Enle Liu
Fajie Zhang
Fanqi Wan
Guanzhe Huang
Gulin Yan
Guoliang Cao
Guopeng Li
Han Cheng
Hangyu Guo
Hanshan Zhang
Hao Nie
Haonan Jia
Haoran Lv
Hebin Zhou
Hekun Lv
Heng Wang
Heung-Yeung Shum
Hongbo Huang
Hongbo Peng
Hongyu Zhou
Hongyuan Wang
Houyong Chen
Huangxi Zhu
Huimin Wu
Huiyong Guo
Jia Wang
Jian Zhou
Jianjian Sun
Jiaoren Wu
Jiaran Zhang
Jiashu Lv
Jiashuo Liu
Jiayi Fu
Jiayu Liu
Jie Cheng
Jie Luo
Jie Yang
Jie Zhou
Jieyi Hou
Jing Bai
Jingcheng Hu
Jingjing Xie
Jingwei Wu
Jingyang Zhang
Jishi Zhou
Junfeng Liu
Junzhe Lin
Ka Man Lo
Kai Liang
Kaibo Liu
Kaijun Tan
Kaiwen Yan
Kaixiang Li
Kang An
Kangheng Lin
Lei Yang
Liang Lv
Liang Zhao
Liangyu Chen
Lieyu Shi
Liguo Tan
Lin Lin
Lina Chen
Luck Ma
Mengqiang Ren
Michael Li
Ming Li
Mingliang Li
Mingming Zhang
Mingrui Chen
Mitt Huang
Na Wang
Peng Liu
Qi Han
Main text: 54 pages, 10 figures, 26 tables; bibliography: 13 pages
Abstract

We introduce Step 3.5 Flash, a sparse Mixture-of-Experts (MoE) model that bridges frontier-level agentic intelligence and computational efficiency. We focus on what matters most when building agents: sharp reasoning and fast, reliable execution. Step 3.5 Flash pairs a 196B-parameter foundation with 11B active parameters for efficient inference. It is optimized with interleaved 3:1 sliding-window/full attention and Multi-Token Prediction (MTP-3) to reduce the latency and cost of multi-round agentic interactions. To reach frontier-level intelligence, we design a scalable reinforcement learning framework that combines verifiable signals with preference feedback, while remaining stable under large-scale off-policy training, enabling consistent self-improvement across mathematics, code, and tool use. Step 3.5 Flash demonstrates strong performance across agent, coding, and math tasks, achieving 85.4% on IMO-AnswerBench, 86.4% on LiveCodeBench-v6 (2024.08-2025.05), 88.2% on tau2-Bench, 69.0% on BrowseComp (with context management), and 51.0% on Terminal-Bench 2.0, comparable to frontier models such as GPT-5.2 xHigh and Gemini 3.0 Pro. By redefining the efficiency frontier, Step 3.5 Flash provides a high-density foundation for deploying sophisticated agents in real-world industrial environments.
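The abstract's interleaved 3:1 sliding-window/full attention can be pictured as a repeating per-layer schedule: three layers of local windowed attention followed by one layer of global attention. The sketch below is an illustrative assumption of how such a schedule might be laid out; the function name, layer count, and ratio parameter are hypothetical and not taken from the paper.

```python
# Hedged sketch: one plausible layout of an interleaved 3:1
# sliding-window/full attention schedule. The 3:1 ratio comes from the
# abstract; everything else (names, layer count) is illustrative.

def attention_schedule(num_layers: int, ratio: int = 3) -> list:
    """Return a per-layer attention type: `ratio` sliding-window layers
    followed by one full-attention layer, repeating through the stack."""
    schedule = []
    for i in range(num_layers):
        if (i + 1) % (ratio + 1) == 0:
            schedule.append("full")     # every (ratio+1)-th layer attends globally
        else:
            schedule.append("sliding")  # remaining layers use a local window
    return schedule

print(attention_schedule(8))
# ['sliding', 'sliding', 'sliding', 'full',
#  'sliding', 'sliding', 'sliding', 'full']
```

Under this kind of schedule, most layers pay only the cost of local attention while the periodic full-attention layers preserve long-range information flow, which is the efficiency trade-off the abstract points to for multi-round agentic interactions.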
