
Particle Swarm Optimization for Generating Fuzzy Reinforcement Learning Policies

Abstract

Fuzzy controllers are known to serve as efficient and interpretable system controllers for continuous state and action spaces. To date, these controllers have been constructed manually or trained automatically, either on expert-generated, problem-specific cost functions or by incorporating detailed knowledge about the optimal control strategy. Neither requirement for automatic training is met in the majority of real-world reinforcement learning (RL) problems. In such applications, online learning is often prohibited for safety reasons, since it requires exploration of the problem's dynamics during policy training.

We introduce a new fuzzy particle swarm reinforcement learning (FPSRL) approach that constructs fuzzy RL policies solely by training their parameters on world models that simulate the real system dynamics. These world models are created by an autonomous machine learning technique using previously generated transition samples of the real system. This approach combines self-organizing fuzzy controllers with model-based batch RL for the first time. FPSRL is therefore intended for domains where online learning is forbidden, the system dynamics are reasonably easy to model from previously generated default-policy transition samples, and a comparatively simple, interpretable control policy is expected to exist.

We demonstrate FPSRL's efficiency on problems from such domains using three standard RL benchmarks: mountain car, cart-pole balancing, and cart-pole swing-up. Our experiments yielded high-performing and interpretable fuzzy policies.
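To make the abstract's idea concrete, below is a minimal sketch of the FPSRL loop: a small Takagi-Sugeno-style fuzzy policy whose parameters are optimized by plain global-best particle swarm optimization, with every candidate evaluated only on a surrogate world model. The toy mountain-car dynamics, the rule parameterization, and all PSO hyperparameters here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def world_model_step(state, action):
    """Toy surrogate dynamics standing in for a learned world model:
    a 1-D mountain-car-like system with position and velocity."""
    pos, vel = state
    vel = np.clip(vel + 0.001 * action - 0.0025 * np.cos(3 * pos), -0.07, 0.07)
    pos = np.clip(pos + vel, -1.2, 0.6)
    reward = -1.0 if pos < 0.5 else 0.0   # -1 per step until the goal is reached
    return np.array([pos, vel]), reward, pos >= 0.5

def fuzzy_policy(params, state, n_rules=3):
    """Takagi-Sugeno-style policy: Gaussian rule activations over the state,
    each rule voting for a scalar action; output is the activation-weighted mean."""
    p = params.reshape(n_rules, 2 * len(state) + 1)
    centers, widths, actions = p[:, :2], np.abs(p[:, 2:4]) + 1e-3, p[:, 4]
    act = np.exp(-np.sum(((state - centers) / widths) ** 2, axis=1))
    return np.clip(np.sum(act * actions) / (np.sum(act) + 1e-9), -1.0, 1.0)

def episode_return(params, horizon=200):
    """Fitness of a parameter vector: simulated return on the world model."""
    state, total = np.array([-0.5, 0.0]), 0.0
    for _ in range(horizon):
        state, r, done = world_model_step(state, fuzzy_policy(params, state))
        total += r
        if done:
            break
    return total

def pso(dim, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Plain global-best PSO over the fuzzy policy's parameter vector."""
    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([episode_return(p) for p in x])
    g = pbest[np.argmax(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        f = np.array([episode_return(p) for p in x])
        improved = f > pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[np.argmax(pbest_f)].copy()
    return g, pbest_f.max()

if __name__ == "__main__":
    best_params, best_return = pso(dim=3 * 5)  # 3 rules x 5 parameters each
    print("best simulated return:", best_return)
```

Note the key property the abstract emphasizes: no interaction with the real system occurs during optimization; only the (here, hand-written) world model is queried, and the resulting policy stays interpretable as a short list of fuzzy rules.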
