
CharacterFlywheel: Scaling Iterative Improvement of Engaging and Steerable LLMs in Production

Yixin Nie
Lin Guan
Zhongyao Ma
Anchit Gupta
Yipin Zhou
Xiao Li
Zhengping Zhou
Raymond Zeng
Gelin Zhou
Shigan Chu
Ajay Thampi
Wancen Mu
Nathan Shuster
Ketong Wang
Lin Chen
Jason Brewer
Derek Hao Hu
Alexander McCauley
Jason Weston
Sem Park
Na Zhang
Kevin Tang
Main: 24 pages · 7 figures · 9 tables · Bibliography: 4 pages · Appendix: 2 pages
Abstract

This report presents CharacterFlywheel, an iterative flywheel process for improving large language models (LLMs) in production social chat applications across Instagram, WhatsApp, and Messenger. Starting from LLaMA 3.1, we refined models across 15 generations using data from both internal and external real-user traffic. Through continuous deployments from July 2024 to April 2025, we conducted controlled 7-day A/B tests showing consistent engagement improvements: 7 of 8 newly deployed models demonstrated positive lift over the baseline, with the strongest performers achieving up to 8.8% improvement in engagement breadth and 19.4% in engagement depth. We also observed substantial gains in steerability, with instruction following increasing from 59.2% to 84.8% and instruction violations decreasing from 26.6% to 5.8%. We detail the CharacterFlywheel process, which integrates data curation, reward modeling to estimate and interpolate the landscape of engagement metrics, supervised fine-tuning (SFT), reinforcement learning (RL), and both offline and online evaluation to ensure reliable progress at each optimization step. We also discuss our methods for preventing overfitting and navigating production dynamics at scale. These contributions advance the scientific rigor and understanding of LLMs in social applications serving millions of users.
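As a minimal sketch of how the reported lift figures are typically read: an A/B test compares a metric's mean in the treatment arm against the control arm, and the relative lift is the percentage change. The function and the example numbers below are hypothetical, not taken from the paper; only the lift formula itself is standard.

```python
# Illustrative sketch (not the authors' code): relative lift of an
# engagement metric in a treatment-vs-control A/B test.
def relative_lift(control_mean: float, treatment_mean: float) -> float:
    """Percentage change of the treatment arm over the control arm."""
    return (treatment_mean - control_mean) / control_mean * 100.0

# Hypothetical example: a treatment arm averaging 5.97 on some engagement
# metric vs. 5.00 in control corresponds to a 19.4% lift, matching the
# magnitude of the paper's strongest reported engagement-depth gain.
print(round(relative_lift(5.00, 5.97), 1))  # prints 19.4
```

In practice such lifts are reported only when statistically significant over the 7-day test window; the sketch omits any significance testing.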
