Breaking the Deadly Triad with a Target Network
arXiv:2101.08862 · 21 January 2021
Shangtong Zhang, Hengshuai Yao, Shimon Whiteson
AAML

Papers citing "Breaking the Deadly Triad with a Target Network"

14 / 14 papers shown
Title
Simplifying Deep Temporal Difference Learning
Simplifying Deep Temporal Difference Learning
Matteo Gallici
Mattie Fellows
Benjamin Ellis
B. Pou
Ivan Masmitja
Jakob Foerster
Mario Martin
OffRL
62
17
0
05 Jul 2024
Regularized Q-Learning with Linear Function Approximation
Regularized Q-Learning with Linear Function Approximation
Jiachen Xi
Alfredo Garcia
P. Momcilovic
38
2
0
26 Jan 2024
TD Convergence: An Optimization Perspective
TD Convergence: An Optimization Perspective
Kavosh Asadi
Shoham Sabach
Yao Liu
Omer Gottesman
Rasool Fakoor
MU
25
8
0
30 Jun 2023
Why Target Networks Stabilise Temporal Difference Methods
Why Target Networks Stabilise Temporal Difference Methods
Matt Fellows
Matthew Smith
Shimon Whiteson
OOD
AAML
21
7
0
24 Feb 2023
Bridging the Gap Between Target Networks and Functional Regularization
Alexandre Piché
Valentin Thomas
Joseph Marino
Rafael Pardiñas
Gian Maria Marconi
C. Pal
Mohammad Emtiyaz Khan
14
1
0
21 Oct 2022
Target Network and Truncation Overcome The Deadly Triad in $Q$-Learning
Target Network and Truncation Overcome The Deadly Triad in QQQ-Learning
Zaiwei Chen
John-Paul Clarke
S. T. Maguluri
28
19
0
05 Mar 2022
On the Convergence of SARSA with Linear Function Approximation
On the Convergence of SARSA with Linear Function Approximation
Shangtong Zhang
Rémi Tachet des Combes
Romain Laroche
26
10
0
14 Feb 2022
Regularized Q-learning
Regularized Q-learning
Han-Dong Lim
Donghwan Lee
27
10
0
11 Feb 2022
The Impact of Data Distribution on Q-learning with Function
  Approximation
The Impact of Data Distribution on Q-learning with Function Approximation
Pedro P. Santos
Diogo S. Carvalho
Alberto Sardinha
Francisco S. Melo
OffRL
19
2
0
23 Nov 2021
Global Optimality and Finite Sample Analysis of Softmax Off-Policy Actor Critic under State Distribution Mismatch
Global Optimality and Finite Sample Analysis of Softmax Off-Policy Actor Critic under State Distribution Mismatch
Shangtong Zhang
Rémi Tachet des Combes
Romain Laroche
35
10
0
04 Nov 2021
Convergent and Efficient Deep Q Network Algorithm
Convergent and Efficient Deep Q Network Algorithm
Zhikang T. Wang
Masahito Ueda
27
12
0
29 Jun 2021
Analysis of a Target-Based Actor-Critic Algorithm with Linear Function
  Approximation
Analysis of a Target-Based Actor-Critic Algorithm with Linear Function Approximation
Anas Barakat
Pascal Bianchi
Julien Lehmann
32
9
0
14 Jun 2021
A Finite Time Analysis of Two Time-Scale Actor Critic Methods
A Finite Time Analysis of Two Time-Scale Actor Critic Methods
Yue Wu
Weitong Zhang
Pan Xu
Quanquan Gu
90
146
0
04 May 2020
Optimism in Reinforcement Learning with Generalized Linear Function
  Approximation
Optimism in Reinforcement Learning with Generalized Linear Function Approximation
Yining Wang
Ruosong Wang
S. Du
A. Krishnamurthy
135
135
0
09 Dec 2019
1