Though neural networks trained on large data sets have been successfully used to describe and predict many physical phenomena, there is a sense among scientists that, unlike traditional scientific models, where relationships come packaged in the form of simple mathematical expressions, the findings of a neural network cannot be integrated into the body of scientific knowledge. Critics of ML's inability to produce human-understandable relationships have converged on the concept of "interpretability" as the point at which ML departs from more traditional forms of science. As the growing interest in interpretability makes clear, researchers in the physical sciences seek not only to build predictive models, but also to uncover the fundamental principles that govern a system of interest. However, the literature lacks both a clear definition of interpretability and an account of the precise role it plays in science. In this work, we argue that researchers in equation discovery and symbolic regression tend to conflate the concept of sparsity with interpretability. We review key papers on interpretable ML from outside the scientific community and argue that, though the definitions and methods they propose can inform questions of interpretability in scientific machine learning (SciML), they are inadequate for this new purpose. Noting these deficiencies, we propose an operational definition of interpretability for the physical sciences. Our notion of interpretability emphasizes understanding of the mechanism over mathematical sparsity. Innocuous though it may seem, this emphasis on mechanism shows that sparsity is often unnecessary. It also calls into question the possibility of interpretable scientific discovery when prior knowledge is lacking. We believe that a precise and philosophically informed definition of interpretability in SciML will help focus research efforts toward the most significant obstacles to realizing a data-driven scientific future.