Joint Optimization of Multi-Objective Reinforcement Learning with Policy Gradient Based Algorithm

Abstract

Many engineering problems have multiple objectives, and the overall aim is to optimize a non-linear function of these objectives. In this paper, we formulate the problem of maximizing a non-linear concave function of multiple long-term objectives. A policy-gradient based model-free algorithm is proposed for the problem. To compute an estimate of the gradient, a biased estimator is proposed. The proposed algorithm is shown to converge to within $\epsilon$ of the global optimum after sampling $\mathcal{O}\left(\frac{M^4\sigma^2}{(1-\gamma)^8\epsilon^4}\right)$ trajectories, where $\gamma$ is the discount factor and $M$ is the number of agents, thus achieving the same dependence on $\epsilon$ as the policy gradient algorithm for standard reinforcement learning.
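To make the setup concrete, below is a minimal sketch, not the paper's algorithm, of a REINFORCE-style update for a concave function $f$ of $M$ discounted objectives. The chain rule gives $\nabla_\theta f(J(\theta)) = \sum_i \frac{\partial f}{\partial J_i} \nabla_\theta J_i$; plugging a sample mean $\hat{J}$ into $\nabla f$ yields a biased gradient estimate of the kind the abstract alludes to. The toy environment, the choice $f(J) = \sum_i \log J_i$ (a proportional-fairness objective), and all names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-state MDP with M = 2 per-objective rewards (hypothetical setup;
# everything here is an illustrative assumption, not the paper's algorithm).
n_states, n_actions, M = 2, 2, 2
gamma, horizon = 0.9, 50

# r[s, a] is a length-M vector of rewards, one entry per objective.
r = np.array([[[1.0, 0.0], [0.0, 1.0]],
              [[0.5, 0.5], [0.0, 0.0]]])

def step(s, a):
    # Deterministic transitions: action a moves the system to state a.
    return a, r[s, a]

def policy(theta, s):
    # Tabular softmax policy.
    z = theta[s] - theta[s].max()
    p = np.exp(z)
    return p / p.sum()

def rollout(theta):
    """One episode: return the M discounted returns and the accumulated
    score function sum_t grad_theta log pi(a_t | s_t)."""
    s, G = 0, np.zeros(M)
    score = np.zeros_like(theta)
    for t in range(horizon):
        p = policy(theta, s)
        a = rng.choice(n_actions, p=p)
        score[s] += np.eye(n_actions)[a] - p  # grad of log softmax
        s, rew = step(s, a)
        G += gamma ** t * rew
    return G, score

def grad_f(J):
    # Concave scalarization f(J) = sum_i log J_i, so grad f = 1 / J.
    return 1.0 / np.maximum(J, 1e-8)

theta = np.zeros((n_states, n_actions))
lr, batch, iters = 0.05, 32, 200
for _ in range(iters):
    samples = [rollout(theta) for _ in range(batch)]
    J_hat = np.mean([g for g, _ in samples], axis=0)  # plug-in estimate of J(theta)
    w = grad_f(J_hat)  # chain-rule weights; biased, since grad_f is evaluated
                       # at a sample mean drawn from the same batch
    # REINFORCE estimate of sum_i w_i * grad J_i
    grad = np.mean([(w @ g) * sc for g, sc in samples], axis=0)
    theta += lr * grad

print("J estimate:", J_hat, " f(J):", np.log(np.maximum(J_hat, 1e-8)).sum())
```

A common refinement is to estimate $\hat{J}$ from a batch separate from the one used for the score-function term; this decorrelates the weights from the gradient samples but $\nabla f(\hat{J})$ remains a biased stand-in for $\nabla f(J)$.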
