POPCORN: Partially Observed Prediction COnstrained ReiNforcement Learning

International Conference on Artificial Intelligence and Statistics (AISTATS), 2020
Abstract

Many medical decision-making tasks can be framed as partially observed Markov decision processes (POMDPs). However, prevailing two-stage approaches that first learn a POMDP and then solve it often fail, because the model that best fits the data may not be the one best suited for planning. We introduce a new optimization objective that (a) produces both high-performing policies and high-quality generative models, even when some observations are irrelevant for planning, and (b) does so in the batch off-policy settings typical in healthcare, where only retrospective data is available. We demonstrate our approach on synthetic examples and a challenging medical decision-making problem.