Using Logical Specifications of Objectives in Multi-Objective Reinforcement Learning

3 October 2019
Max Ploner
Anand Balakrishnan
Sebastian Pohl
David Wingate
arXiv:1910.01723
Abstract

It is notoriously difficult to control the behavior of reinforcement learning agents. Agents often learn to exploit the environment or reward signal and must be retrained multiple times. The multi-objective reinforcement learning (MORL) framework separates a reward function into several objectives. An ideal MORL agent learns to generalize to novel combinations of objectives, allowing for better control of an agent's behavior without requiring retraining. Many MORL approaches use a weight vector to parameterize the importance of each objective; however, this approach suffers from a lack of expressiveness and interpretability. We propose using propositional logic to specify the importance of multiple objectives. Because the logic's predicates correspond directly to objectives, specifications are inherently more interpretable. Moreover, the set of specifications expressible in a formal language is a superset of those expressible by weight vectors. In this paper, we define a formal language based on propositional logic with quantitative semantics. We encode logical specifications using a recurrent neural network and show that MORL agents parameterized by these encodings generalize to novel specifications over objectives and achieve performance comparable to single-objective baselines.
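For intuition, below is one standard way to give propositional formulas quantitative semantics over objective values, in the style of min/max robustness semantics. This is an assumed form for illustration only; the paper defines its own formal language, which may differ in detail. Each predicate $p_i$ corresponds to objective $i$, and $\rho$ maps a formula and a vector $v$ of per-objective values to a real number:

```latex
% Assumed min/max quantitative semantics (illustrative, not the paper's exact definitions).
% v_i denotes the value achieved on objective i.
\begin{align*}
  \rho(p_i, v)                &= v_i \\
  \rho(\lnot\varphi, v)       &= -\rho(\varphi, v) \\
  \rho(\varphi \land \psi, v) &= \min\bigl(\rho(\varphi, v),\, \rho(\psi, v)\bigr) \\
  \rho(\varphi \lor \psi, v)  &= \max\bigl(\rho(\varphi, v),\, \rho(\psi, v)\bigr)
\end{align*}
```

Under semantics of this kind, a specification such as $p_1 \lor p_2$ ("satisfy either objective") rewards whichever objective is doing better, a nonlinear preference that no fixed weight vector in a linear scalarization $w \cdot v$ can express. This illustrates the superset claim above.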
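The following is a minimal sketch of the encoding idea the abstract describes, assuming PyTorch; the module names (SpecEncoder, ConditionedPolicy), the toy vocabulary, and all layer sizes are hypothetical and not taken from the paper. A tokenized formula is embedded by a GRU, and the resulting vector is concatenated with the observation, so a single policy network can be steered by different specifications without retraining:

```python
# Illustrative sketch (not the paper's implementation): condition a MORL
# policy on an RNN encoding of a logical specification.
import torch
import torch.nn as nn

class SpecEncoder(nn.Module):
    """Embeds a tokenized propositional formula, e.g. ['p0', 'and', 'p1'],
    into a fixed-size vector with a GRU."""
    def __init__(self, vocab_size, embed_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)

    def forward(self, token_ids):      # token_ids: (batch, seq_len)
        x = self.embed(token_ids)      # (batch, seq_len, embed_dim)
        _, h = self.gru(x)             # h: (1, batch, hidden_dim)
        return h.squeeze(0)            # (batch, hidden_dim)

class ConditionedPolicy(nn.Module):
    """Maps (observation, specification embedding) to action logits, so one
    network serves many combinations of objectives."""
    def __init__(self, obs_dim, spec_dim, n_actions, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + spec_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, obs, spec_embedding):
        return self.net(torch.cat([obs, spec_embedding], dim=-1))

# Usage with a hypothetical 16-token vocabulary and an 8-dim observation:
encoder = SpecEncoder(vocab_size=16)
policy = ConditionedPolicy(obs_dim=8, spec_dim=64, n_actions=4)
tokens = torch.tensor([[3, 1, 4]])     # e.g. "p0 AND p1" under some vocabulary
logits = policy(torch.randn(1, 8), encoder(tokens))
```

At training time, such an agent would be optimized against the quantitative satisfaction value of each sampled specification; at test time it is handed a new formula, which is the generalization the abstract reports.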
