Joint Optimization of Multi-Objective Reinforcement Learning with Policy Gradient Based Algorithm

Abstract
Many engineering problems have multiple objectives, and the overall aim is to optimize a non-linear function of these objectives. In this paper, we formulate the problem of maximizing a non-linear concave function of multiple long-term objectives. A policy-gradient-based model-free algorithm is proposed for this problem. To compute an estimate of the gradient, a biased estimator is proposed. The proposed algorithm is shown to converge to within ε of the global optimum; the number of sampled trajectories required depends on the discount factor γ and the number of agents, and the algorithm achieves the same dependence on ε as the policy gradient algorithm for standard reinforcement learning.
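The setting described in the abstract can be illustrated with a minimal sketch (not the paper's exact algorithm): policy-gradient ascent on a concave function of two long-term objectives, here a two-armed bandit with f(J1, J2) = log J1 + log J2 as an assumed concave scalarization. By the chain rule, ∇f = Σ_i (∂f/∂J_i) ∇J_i, where each ∇J_i is estimated with the REINFORCE (score-function) estimator; evaluating ∂f/∂J_i at Monte-Carlo estimates of J_i makes the overall gradient estimator biased, mirroring the biased estimator mentioned above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-armed bandit with a two-dimensional reward per arm:
# arm 0 is good on objective 1, arm 1 is good on objective 2.
REWARDS = np.array([[1.0, 0.2],
                    [0.2, 1.0]])

def softmax(theta):
    z = np.exp(theta - theta.max())
    return z / z.sum()

def train(steps=2000, lr=0.5, batch=64):
    """Gradient ascent on f(J1, J2) = log J1 + log J2 over softmax policies."""
    theta = np.zeros(2)
    for _ in range(steps):
        pi = softmax(theta)
        arms = rng.choice(2, size=batch, p=pi)
        rewards = REWARDS[arms]                  # (batch, 2) per-objective samples
        J = rewards.mean(axis=0) + 1e-8          # Monte-Carlo objective estimates
        df_dJ = 1.0 / J                          # gradient of f at the estimate (biased)
        # Score function for a softmax policy: grad_theta log pi(a) = onehot(a) - pi.
        score = np.eye(2)[arms] - pi
        weights = rewards @ df_dJ                # chain-rule scalar weight per sample
        grad = (score * weights[:, None]).mean(axis=0)
        theta += lr * grad
    return softmax(theta)

pi = train()
```

For this symmetric reward matrix, the log-scalarization is maximized by a mixed policy near (0.5, 0.5), which a plain single-objective policy gradient on either objective alone would not find; this is the sense in which the joint objective shapes the optimum.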