
Common Benchmarks Undervalue the Generalization Power of Programmatic Policies

Main: 9 pages · Appendix: 6 pages · Bibliography: 2 pages · 8 figures · 17 tables
Abstract

Algorithms for learning programmatic representations for sequential decision-making problems are often evaluated on out-of-distribution (OOD) problems, with the common conclusion that programmatic policies generalize better than neural policies on such problems. In this position paper, we argue that commonly used benchmarks undervalue the generalization capabilities of programmatic representations. We analyze the experiments of four papers from the literature and show that neural policies, which were previously reported not to generalize, can generalize as effectively as programmatic policies on OOD problems. This is achieved with simple changes to the neural policies' training pipeline. Namely, we show that simpler neural architectures, given the same type of sparse observations used with programmatic policies, can attain OOD generalization. Another modification we show to be effective is the use of reward functions that encourage safer policies (e.g., agents that drive slowly generalize better). We also argue for creating benchmark problems that highlight concepts needed for OOD generalization, which may challenge neural policies but align with programmatic representations, such as tasks requiring algorithmic constructs like stacks.

@article{rajabpour2025_2506.14162,
  title={Common Benchmarks Undervalue the Generalization Power of Programmatic Policies},
  author={Amirhossein Rajabpour and Kiarash Aghakasiri and Sandra Zilles and Levi H. S. Lelis},
  journal={arXiv preprint arXiv:2506.14162},
  year={2025}
}