A Greedy Approach to Adapting the Trace Parameter for Temporal Difference Learning

2 July 2016
Martha White
Adam White
Abstract

One of the main obstacles to broad application of reinforcement learning methods is the parameter sensitivity of our core learning algorithms. In many large-scale applications, online computation and function approximation represent key strategies in scaling up reinforcement learning algorithms. In this setting, we have effective and reasonably well understood algorithms for adapting the learning-rate parameter online during learning. Such meta-learning approaches can improve the robustness of learning and enable specialization to the current task, improving learning speed. For the temporal-difference learning algorithms studied here, there is yet another parameter, λ, that similarly impacts learning speed and stability in practice. Unfortunately, unlike the learning-rate parameter, λ parametrizes the objective function that temporal-difference methods optimize. Different choices of λ produce different fixed-point solutions, and thus adapting λ online and characterizing the optimization is substantially more complex than adapting the learning-rate parameter. There is no meta-learning method for λ that achieves (1) incremental updating, (2) compatibility with function approximation, and (3) stability of learning under both on- and off-policy sampling. In this paper we contribute a novel objective function for optimizing λ as a function of state rather than time. We derive a new incremental, linear-complexity λ-adaptation algorithm that does not require offline batch updating or access to a model of the world, and present a suite of experiments illustrating the practicality of our new algorithm in three different settings. Taken together, our contributions represent a concrete step towards black-box application of temporal-difference learning methods in real-world problems.
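
For context, the setting the abstract describes is TD(λ) with linear function approximation where the trace parameter λ is a function of state rather than a single constant. The following is a minimal sketch of that standard setting only; it is not the paper's λ-adaptation algorithm, and env_step, features, and lambda_fn are hypothetical callables assumed to be supplied by the caller.

```python
import numpy as np

def td_lambda_state_dependent(env_step, features, lambda_fn, n_features,
                              alpha=0.01, gamma=0.99, n_steps=10_000):
    """Linear TD(lambda) with accumulating traces and a state-dependent lambda.

    env_step()   -> (s, r, s_next, done)            (hypothetical environment interface)
    features(s)  -> np.ndarray of shape (n_features,)
    lambda_fn(s) -> trace parameter in [0, 1] for state s
    """
    w = np.zeros(n_features)   # value-function weights
    z = np.zeros(n_features)   # eligibility trace
    for _ in range(n_steps):
        s, r, s_next, done = env_step()
        x, x_next = features(s), features(s_next)
        # TD error under the linear value estimate
        v_next = 0.0 if done else w @ x_next
        delta = r + gamma * v_next - w @ x
        # Accumulating trace, decayed by the state-dependent lambda(s)
        z = gamma * lambda_fn(s) * z + x
        w += alpha * delta * z
        if done:
            z[:] = 0.0         # reset trace at episode boundaries
    return w
```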
