InnateCoder: Learning Programmatic Options with Foundation Models

Abstract

Outside of transfer learning settings, reinforcement learning agents start their learning process from a clean slate. As a result, such agents have to go through a slow process to learn even the most obvious skills required to solve a problem. In this paper, we present InnateCoder, a system that leverages human knowledge encoded in foundation models to provide programmatic policies that encode "innate skills" in the form of temporally extended actions, or options. In contrast to existing approaches to learning options, InnateCoder learns them from the general human knowledge encoded in foundation models in a zero-shot setting, and not from the knowledge the agent gains by interacting with the environment. Then, InnateCoder searches for a programmatic policy by combining the programs encoding these options into larger and more complex programs. We hypothesized that InnateCoder's way of learning and using options could improve the sample efficiency of current methods for learning programmatic policies. Empirical results in MicroRTS and Karel the Robot support our hypothesis: InnateCoder is more sample efficient than versions of the system that do not use options or that learn them from experience.
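
The two-stage recipe described above (zero-shot sampling of option programs from a foundation model, then a stochastic search that splices those programs into a larger policy) can be pictured with a minimal sketch. This is an illustrative assumption, not the paper's implementation: query_foundation_model, evaluate, and combine are hypothetical stand-ins for a model API, an episode-return estimator, and a DSL-aware program editor.

import random

# Minimal, hypothetical sketch of the idea sketched in the abstract.
# query_foundation_model, evaluate, and combine are placeholders, not
# the paper's API.

def sample_options(query_foundation_model, n=50):
    # Zero-shot: ask the foundation model for short DSL programs that
    # encode plausible skills ("options"), without any environment data.
    prompt = ("Write a short program in the target DSL that encodes one "
              "useful skill for this domain.")
    return [query_foundation_model(prompt) for _ in range(n)]

def search_policy(options, evaluate, combine, iterations=200):
    # Stochastic hill climbing: repeatedly splice a sampled option into
    # the current best program and keep the candidate if it scores
    # higher (e.g., higher average episode return).
    best = random.choice(options)
    best_score = evaluate(best)
    for _ in range(iterations):
        candidate = combine(best, random.choice(options))
        score = evaluate(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best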

@article{moraes2025_2505.12508,
  title={InnateCoder: Learning Programmatic Options with Foundation Models},
  author={Rubens O. Moraes and Quazi Asif Sadmine and Hendrik Baier and Levi H. S. Lelis},
  journal={arXiv preprint arXiv:2505.12508},
  year={2025}
}