Reinforced Latent Reasoning for LLM-based Recommendation

Large Language Models (LLMs) have demonstrated impressive reasoning capabilities in complex problem-solving tasks, sparking growing interest in their application to preference reasoning in recommendation systems. Existing methods typically rely on fine-tuning with explicit chain-of-thought (CoT) data. However, these methods face significant practical limitations due to (1) the difficulty of obtaining high-quality CoT data for recommendation and (2) the high inference latency caused by generating CoT reasoning. In this work, we explore an alternative approach that shifts from explicit CoT reasoning to compact, information-dense latent reasoning. This approach eliminates the need for explicit CoT generation and improves inference efficiency, as a small set of latent tokens can effectively capture the entire reasoning process. Building on this idea, we propose LatentR, a novel end-to-end training framework that leverages reinforcement learning (RL) to optimize latent reasoning without relying on any CoT data. LatentR adopts a two-stage training strategy: first, supervised fine-tuning to initialize the latent reasoning module, followed by pure RL training that encourages exploration through a rule-based reward design. Our RL implementation is based on a modified GRPO algorithm, which reduces computational overhead during training and introduces continuous reward signals for more efficient learning. Extensive experiments demonstrate that LatentR enables effective latent reasoning without any direct supervision of the reasoning process, significantly improving performance when integrated with different LLM-based recommendation methods. Our code is available at this https URL.
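The abstract mentions a modified GRPO algorithm with a rule-based, continuous reward but gives no details. The sketch below illustrates only the generic GRPO-style group-relative advantage computation, paired with a hypothetical continuous reward (reciprocal rank of the ground-truth item among sampled recommendations). The reward choice, function names, and item identifiers are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (assumptions, not the paper's code): a continuous rule-based
# reward and GRPO-style group-relative advantages without a value network.
import numpy as np

def reciprocal_rank_reward(ranked_items, target_item):
    """Continuous rule-based reward: 1/rank of the ground-truth item, 0 if absent."""
    try:
        return 1.0 / (ranked_items.index(target_item) + 1)
    except ValueError:
        return 0.0

def grpo_advantages(group_rewards, eps=1e-8):
    """Normalize each rollout's reward by the group's mean and std
    (the group-relative baseline at the core of GRPO)."""
    r = np.asarray(group_rewards, dtype=np.float64)
    return (r - r.mean()) / (r.std() + eps)

if __name__ == "__main__":
    # One user query, a group of G=4 sampled latent-reasoning rollouts,
    # each producing a ranked recommendation list (hypothetical items).
    target = "item_42"
    rollouts = [
        ["item_42", "item_7", "item_3"],   # target ranked 1st -> reward 1.0
        ["item_7", "item_42", "item_3"],   # target ranked 2nd -> reward 0.5
        ["item_7", "item_3", "item_9"],    # target missing    -> reward 0.0
        ["item_3", "item_42", "item_7"],   # target ranked 2nd -> reward 0.5
    ]
    rewards = [reciprocal_rank_reward(r, target) for r in rollouts]
    print("rewards:   ", rewards)
    print("advantages:", grpo_advantages(rewards))
```

A reciprocal-rank reward is continuous in the sense that it grades partial success (how highly the target item is ranked) rather than returning only 0/1; any such graded rule-based signal would fit the same advantage computation.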
@article{zhang2025_2505.19092,
  title   = {Reinforced Latent Reasoning for LLM-based Recommendation},
  author  = {Yang Zhang and Wenxin Xu and Xiaoyan Zhao and Wenjie Wang and Fuli Feng and Xiangnan He and Tat-Seng Chua},
  journal = {arXiv preprint arXiv:2505.19092},
  year    = {2025}
}