Steering Generative Models with Experimental Data for Protein Fitness Optimization

Protein fitness optimization involves finding a protein sequence that maximizes desired quantitative properties in a combinatorially large design space of possible sequences. Recent developments in steering protein generative models (e.g., diffusion models, language models) offer a promising approach. However, by and large, past studies have optimized surrogate rewards and/or utilized large amounts of labeled data for steering, making it unclear how well existing methods perform and compare to each other in real-world optimization campaigns, where fitness is measured by low-throughput wet-lab assays. In this study, we explore fitness optimization using small amounts (hundreds) of labeled sequence-fitness pairs and comprehensively evaluate strategies such as classifier guidance and posterior sampling for guiding generation from different discrete diffusion models of protein sequences. We also demonstrate how guidance can be integrated into adaptive sequence selection akin to Thompson sampling in Bayesian optimization, showing that plug-and-play guidance strategies offer advantages over alternatives such as reinforcement learning with protein language models.
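To make the plug-and-play idea concrete, the following is a minimal sketch (not the paper's implementation) of classifier guidance for a discrete diffusion model over amino-acid sequences: at each denoising step, the denoiser's per-position categorical logits are reweighted by a surrogate fitness model's scores before sampling. The `denoiser_logits` and `fitness_score` functions are hypothetical placeholders standing in for a pretrained diffusion model and a predictor fit on a few hundred labeled sequence-fitness pairs.

```python
import numpy as np

AAS = "ACDEFGHIKLMNPQRSTVWY"  # 20 canonical amino acids
V, L = len(AAS), 8            # vocabulary size, toy sequence length
rng = np.random.default_rng(0)

def denoiser_logits(x_t, t):
    """Placeholder for a pretrained discrete-diffusion denoiser:
    returns per-position logits over amino acids (hypothetical)."""
    return rng.normal(size=(L, V))

def fitness_score(tokens):
    """Placeholder surrogate fitness model, standing in for a
    predictor trained on labeled sequence-fitness pairs."""
    return float(np.sum(np.sin(tokens)))

def guided_step(x_t, t, gamma=1.0):
    """One denoising step with classifier guidance: tilt each
    position's categorical distribution by exp(gamma * fitness)
    of the corresponding single-position substitution."""
    guided = denoiser_logits(x_t, t).copy()
    for i in range(L):
        for v in range(V):
            cand = x_t.copy()
            cand[i] = v
            guided[i, v] += gamma * fitness_score(cand)
    # softmax per position, then sample the next (less noisy) sequence
    probs = np.exp(guided - guided.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    return np.array([rng.choice(V, p=probs[i]) for i in range(L)])

x = rng.integers(0, V, size=L)   # fully noised starting sequence
for t in reversed(range(5)):     # a few toy denoising steps
    x = guided_step(x, t)
print("".join(AAS[v] for v in x))
```

Because the guidance term is added only at sampling time, the same pretrained denoiser can be steered toward new assay data without retraining, which is what makes such strategies attractive in low-data wet-lab campaigns.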
@article{yang2025_2505.15093,
  title={Steering Generative Models with Experimental Data for Protein Fitness Optimization},
  author={Jason Yang and Wenda Chu and Daniel Khalil and Raul Astudillo and Bruce J. Wittmann and Frances H. Arnold and Yisong Yue},
  journal={arXiv preprint arXiv:2505.15093},
  year={2025}
}