Deep Symbolic Optimization (DSO) is a novel computational framework that enables symbolic optimization for scientific discovery, particularly in applications involving the search for intricate symbolic structures. One notable example is equation discovery, which aims to automatically derive mathematical models expressed in symbolic form. In DSO, the discovery process is formulated as a sequential decision-making task. A generative neural network learns a probabilistic model over a vast space of candidate symbolic expressions, while reinforcement learning strategies guide the search toward the most promising regions. This approach integrates gradient-based optimization with evolutionary and local search techniques, and it incorporates in-situ constraints, domain-specific priors, and advanced policy optimization methods. The result is a robust framework capable of efficiently exploring extensive search spaces to identify interpretable and physically meaningful models. Extensive evaluations on benchmark problems have demonstrated that DSO achieves state-of-the-art performance in both accuracy and interpretability. In this chapter, we provide a comprehensive overview of the DSO framework and illustrate its transformative potential for automating symbolic optimization in scientific discovery.
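To make the sequential decision-making formulation concrete, the sketch below samples symbolic expressions token by token in prefix notation and scores them against data. This is a minimal illustration, not the DSO implementation: real DSO uses a learned recurrent-network policy, in-situ constraints, and risk-seeking policy gradients, whereas here a uniform categorical distribution and a tiny hand-picked token library (all names are hypothetical) stand in.

```python
import math
import random

# Token library: name -> (arity, implementation). In DSO the sampling
# distribution is produced by a generative neural network; here we use
# a uniform random choice purely for illustration.
TOKENS = {
    "add": (2, lambda a, b: a + b),
    "mul": (2, lambda a, b: a * b),
    "sin": (1, math.sin),
    "x":   (0, None),  # terminal: the input variable
}

def sample_expression(max_len=16, rng=random):
    """Sample a prefix-notation expression by tracking open argument slots."""
    names = list(TOKENS)
    tokens, open_slots = [], 1
    while open_slots > 0 and len(tokens) < max_len:
        # Near the length limit, force terminals so the expression can close.
        pool = names if len(tokens) + open_slots < max_len else ["x"]
        tok = rng.choice(pool)
        tokens.append(tok)
        open_slots += TOKENS[tok][0] - 1
    return tokens if open_slots == 0 else None  # None: hit the length cap

def evaluate(tokens, x):
    """Recursively evaluate a prefix expression at a scalar input x."""
    def helper(i):
        arity, fn = TOKENS[tokens[i]]
        if arity == 0:
            return x, i + 1
        args, j = [], i + 1
        for _ in range(arity):
            v, j = helper(j)
            args.append(v)
        return fn(*args), j
    value, _ = helper(0)
    return value

def reward(tokens, data):
    """Squashed fitness in (0, 1]: 1 / (1 + RMSE) over (x, y) pairs."""
    se = sum((evaluate(tokens, x) - y) ** 2 for x, y in data)
    return 1.0 / (1.0 + math.sqrt(se / len(data)))
```

In a full search loop, a batch of expressions would be sampled, scored with `reward`, and the policy updated to favor the top-performing fraction of the batch; the key point illustrated here is only that an expression is built as a sequence of discrete token choices and receives a scalar reward at the end.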
@article{hayes2025_2505.10762,
  title={Deep Symbolic Optimization: Reinforcement Learning for Symbolic Mathematics},
  author={Conor F. Hayes and Felipe Leno Da Silva and Jiachen Yang and T. Nathan Mundhenk and Chak Shing Lee and Jacob F. Pettit and Claudio Santiago and Sookyung Kim and Joanne T. Kim and Ignacio Aravena Solis and Ruben Glatt and Andre R. Goncalves and Alexander Ladd and Ahmet Can Solak and Thomas Desautels and Daniel Faissol and Brenden K. Petersen and Mikel Landajuela},
  journal={arXiv preprint arXiv:2505.10762},
  year={2025}
}