Bigger, Regularized, Categorical: High-Capacity Value Functions are Efficient Multi-Task Learners

Abstract

Recent advances in language modeling and vision stem from training large models on diverse, multi-task data. This paradigm has had limited impact in value-based reinforcement learning (RL), where improvements are often driven by small models trained in a single-task context. This is because, in multi-task RL, sparse rewards and gradient conflicts make temporal-difference optimization brittle. Practical workflows for generalist policies therefore avoid online training, instead cloning expert trajectories or distilling collections of single-task policies into one agent. In this work, we show that using high-capacity value models trained via cross-entropy and conditioned on learnable task embeddings addresses the problem of task interference in online RL, allowing for robust and scalable multi-task training. We test our approach on 7 multi-task benchmarks with over 280 unique tasks, spanning high-degree-of-freedom humanoid control and discrete vision-based RL. We find that, despite its simplicity, the proposed approach leads to state-of-the-art single-task and multi-task performance, as well as sample-efficient transfer to new tasks.
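The recipe named in the abstract (a high-capacity critic trained with a cross-entropy, i.e., categorical, value loss and conditioned on a learnable task embedding) can be illustrated with a minimal PyTorch sketch. All names, layer sizes, the bin range, and the two-hot target projection below are illustrative assumptions for one common way to set up cross-entropy value regression, not the authors' exact architecture.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CategoricalCritic(nn.Module):
        """Q-network that outputs logits over value bins, conditioned on a task embedding."""
        def __init__(self, obs_dim, act_dim, num_tasks, embed_dim=64,
                     hidden=1024, num_bins=101, v_min=-10.0, v_max=10.0):
            super().__init__()
            # Learnable per-task embedding provides the task conditioning.
            self.task_embed = nn.Embedding(num_tasks, embed_dim)
            self.net = nn.Sequential(
                nn.Linear(obs_dim + act_dim + embed_dim, hidden),
                nn.LayerNorm(hidden),  # regularization, helpful for large critics
                nn.ReLU(),
                nn.Linear(hidden, num_bins),  # logits over discrete value bins
            )
            self.register_buffer("bins", torch.linspace(v_min, v_max, num_bins))

        def forward(self, obs, act, task_id):
            z = self.task_embed(task_id)
            return self.net(torch.cat([obs, act, z], dim=-1))  # (batch, num_bins)

    def two_hot(target, bins):
        """Project scalar TD targets onto the two nearest bins (two-hot encoding)."""
        target = target.clamp(bins[0], bins[-1])
        idx = torch.searchsorted(bins, target).clamp(1, len(bins) - 1)
        lo, hi = bins[idx - 1], bins[idx]
        w_hi = (target - lo) / (hi - lo)  # linear interpolation weight on the upper bin
        probs = torch.zeros(target.shape[0], len(bins), device=target.device)
        probs.scatter_(1, (idx - 1).unsqueeze(1), (1.0 - w_hi).unsqueeze(1))
        probs.scatter_(1, idx.unsqueeze(1), w_hi.unsqueeze(1))
        return probs

    # Usage with placeholder shapes and random TD targets:
    critic = CategoricalCritic(obs_dim=48, act_dim=12, num_tasks=280)
    obs, act = torch.randn(32, 48), torch.randn(32, 12)
    task_id = torch.randint(0, 280, (32,))
    logits = critic(obs, act, task_id)
    td_target = torch.randn(32)  # stand-in for bootstrapped TD targets
    loss = -(two_hot(td_target, critic.bins) * F.log_softmax(logits, -1)).sum(-1).mean()
    q_value = (F.softmax(logits, -1) * critic.bins).sum(-1)  # expected value over bins

Training the critic against soft two-hot targets turns value regression into a classification problem, which is one way the cross-entropy loss described in the abstract can be realized; the scalar Q-value is recovered as the expectation over the bin support.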

@article{nauman2025_2505.23150,
  title={Bigger, Regularized, Categorical: High-Capacity Value Functions are Efficient Multi-Task Learners},
  author={Michal Nauman and Marek Cygan and Carmelo Sferrazza and Aviral Kumar and Pieter Abbeel},
  journal={arXiv preprint arXiv:2505.23150},
  year={2025}
}