Robust Reinforcement Learning for Discrete Compositional Generation via General Soft Operators
- OffRL

A major bottleneck in scientific discovery involves narrowing a large combinatorial set of objects, such as proteins or molecules, to a small set of promising candidates. While this process largely relies on expert knowledge, recent methods leverage reinforcement learning (RL) to enhance this filtering. They achieve this by estimating proxy reward functions from available datasets and using regularization to generate more diverse candidates. These reward functions are inherently uncertain, raising a particularly salient challenge for scientific discovery. In this work, we show that existing methods, often framed as sampling proportional to a reward function, are inadequate and yield suboptimal candidates, especially in large search spaces. To remedy this issue, we take a robust RL approach and introduce a unified operator that seeks robustness to the uncertainty of the proxy reward function. This general operator targets peakier sampling distributions while encompassing known soft RL operators. It also leads us to a novel algorithm that identifies higher-quality, diverse candidates in both synthetic and real-world tasks. Ultimately, our work offers a new, flexible perspective on discrete compositional generation tasks. Code: this https URL.
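The abstract contrasts reward-proportional sampling with the peakier target distributions induced by soft operators. As a rough, hypothetical illustration (the paper's actual operator is not specified here), the sketch below compares sampling proportional to a proxy reward with a temperature-controlled soft distribution that concentrates more mass on high-reward candidates as the temperature decreases:

```python
import numpy as np

# Hypothetical illustration, not the paper's operator: compare
# reward-proportional sampling with a tempered distribution that
# becomes peakier as the temperature T decreases.

def reward_proportional(rewards):
    """Sampling probabilities proportional to the (proxy) reward."""
    r = np.asarray(rewards, dtype=float)
    return r / r.sum()

def tempered_distribution(rewards, temperature=1.0):
    """Normalize exp(log r / T).
    T = 1 recovers reward-proportional sampling; as T -> 0 the
    distribution concentrates on the highest-reward candidates."""
    log_r = np.log(np.asarray(rewards, dtype=float))
    z = log_r / temperature
    z -= z.max()                      # numerical stability
    p = np.exp(z)
    return p / p.sum()

rewards = [1.0, 2.0, 4.0, 8.0]        # proxy rewards for four candidates
print(reward_proportional(rewards))   # ~[0.067, 0.133, 0.267, 0.533]
print(tempered_distribution(rewards, temperature=0.5))  # peakier: mass shifts to the best candidate
```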
@article{jiralerspong2025_2506.17007,
  title={Robust Reinforcement Learning for Discrete Compositional Generation via General Soft Operators},
  author={Marco Jiralerspong and Esther Derman and Danilo Vucetic and Nikolay Malkin and Bilun Sun and Tianyu Zhang and Pierre-Luc Bacon and Gauthier Gidel},
  journal={arXiv preprint arXiv:2506.17007},
  year={2025}
}