The Primacy Bias in Deep Reinforcement Learning (arXiv:2205.07802)

16 May 2022
Evgenii Nikishin, Max Schwarzer, P. D'Oro, Pierre-Luc Bacon, Aaron C. Courville
Tags: OnRL

Papers citing "The Primacy Bias in Deep Reinforcement Learning"

43 / 43 papers shown

Hyperspherical Normalization for Scalable Deep Reinforcement Learning (24 Feb 2025)
Hojoon Lee, Youngdo Lee, Takuma Seno, Donghu Kim, Peter Stone, Jaegul Choo

Learning to Sample Effective and Diverse Prompts for Text-to-Image Generation (17 Feb 2025)
Taeyoung Yun, Dinghuai Zhang, Jinkyoo Park, Ling Pan
Tags: DiffM

Breaking the Reclustering Barrier in Centroid-based Deep Clustering (04 Nov 2024)
Lukas Miklautz, Timo Klein, Kevin Sidak, Collin Leiber, Thomas Lang, Andrii Shkabrii, Sebastian Tschiatschek, Claudia Plant

Prioritized Generative Replay (23 Oct 2024)
Renhao Wang, Kevin Frans, Pieter Abbeel, Sergey Levine, Alexei A. Efros
Tags: OnRL, DiffM

Uncovering RL Integration in SSL Loss: Objective-Specific Implications for Data-Efficient RL (22 Oct 2024)
Ömer Veysel Çağatan, Barış Akgün
Tags: OffRL

MAD-TD: Model-Augmented Data stabilizes High Update Ratio RL (11 Oct 2024)
C. Voelcker, Marcel Hussing, Eric Eaton, Amir-massoud Farahmand, Igor Gilitschenski

Neuroplastic Expansion in Deep Reinforcement Learning (10 Oct 2024)
Jiashun Liu, J. Obando-Ceron, Aaron C. Courville, L. Pan

Don't flatten, tokenize! Unlocking the key to SoftMoE's efficacy in deep RL (02 Oct 2024)
Ghada Sokar, J. Obando-Ceron, Aaron C. Courville, Hugo Larochelle, Pablo Samuel Castro
Tags: MoE

Frequency and Generalisation of Periodic Activation Functions in Reinforcement Learning (09 Jul 2024)
Augustine N. Mavor-Parker, Matthew J. Sargent, Caswell Barry, Lewis D. Griffin, Clare Lyle

Can Learned Optimization Make Reinforcement Learning Less Difficult? (09 Jul 2024)
Alexander David Goldie, Chris Xiaoxuan Lu, Matthew Jackson, Shimon Whiteson, Jakob N. Foerster

Normalization and effective learning rates in reinforcement learning (01 Jul 2024)
Clare Lyle, Zeyu Zheng, Khimya Khetarpal, James Martens, H. V. Hasselt, Razvan Pascanu, Will Dabney

Fast TRAC: A Parameter-Free Optimizer for Lifelong Reinforcement Learning (26 May 2024)
Aneesh Muppidi, Zhiyu Zhang, Heng Yang

Bigger, Regularized, Optimistic: scaling for compute and sample-efficient continuous control (25 May 2024)
Michal Nauman, M. Ostaszewski, Krzysztof Jankowski, Piotr Miłoś, Marek Cygan
Tags: OffRL

The Curse of Diversity in Ensemble-Based Exploration (07 May 2024)
Zhixuan Lin, P. D'Oro, Evgenii Nikishin, Aaron C. Courville

K-percent Evaluation for Lifelong RL (02 Apr 2024)
Golnaz Mesbahi, Parham Mohammad Panahi, Olya Mastikhina, Martha White, Adam White
Tags: CLL, OffRL

Learning Off-policy with Model-based Intrinsic Motivation For Active Online Exploration (31 Mar 2024)
Yibo Wang, Jiang Zhao
Tags: OffRL, OnRL

Dissecting Deep RL with High Update Ratios: Combatting Value Divergence (09 Mar 2024)
Marcel Hussing, C. Voelcker, Igor Gilitschenski, Amir-massoud Farahmand, Eric Eaton

Overestimation, Overfitting, and Plasticity in Actor-Critic: the Bitter Lesson of Reinforcement Learning (01 Mar 2024)
Michal Nauman, Michal Bortkiewicz, Piotr Miłoś, Tomasz Trzciński, M. Ostaszewski, Marek Cygan
Tags: OffRL

Think2Drive: Efficient Reinforcement Learning by Thinking in Latent World Model for Quasi-Realistic Autonomous Driving (in CARLA-v2) (26 Feb 2024)
Qifeng Li, Xiaosong Jia, Shaobo Wang, Junchi Yan

Q-Star Meets Scalable Posterior Sampling: Bridging Theory and Practice via HyperAgent (05 Feb 2024)
Yingru Li, Jiawei Xu, Lei Han, Zhi-Quan Luo
Tags: BDL, OffRL

An Invitation to Deep Reinforcement Learning (13 Dec 2023)
Bernhard Jaeger, Andreas Geiger
Tags: OffRL, OOD

Directions of Curvature as an Explanation for Loss of Plasticity (30 Nov 2023)
Alex Lewandowski, Haruto Tanaka, Dale Schuurmans, Marlos C. Machado

On the Theory of Risk-Aware Agents: Bridging Actor-Critic and Economics (30 Oct 2023)
Michal Nauman, Marek Cygan

One is More: Diverse Perspectives within a Single Network for Efficient DRL (21 Oct 2023)
Yiqin Tan, Ling Pan, Longbo Huang
Tags: OffRL

Offline Retraining for Online RL: Decoupled Policy Learning to Mitigate Exploration Bias (12 Oct 2023)
Max Sobol Mark, Archit Sharma, Fahim Tajwar, Rafael Rafailov, Sergey Levine, Chelsea Finn
Tags: OffRL, OnRL

Reset It and Forget It: Relearning Last-Layer Weights Improves Continual and Transfer Learning (12 Oct 2023)
Lapo Frati, Neil Traft, Jeff Clune, Nick Cheney
Tags: CLL

Maintaining Plasticity in Continual Learning via Regenerative Regularization (23 Aug 2023)
Saurabh Kumar, Henrik Marklund, Benjamin Van Roy
Tags: CLL, KELM

Improving Language Plasticity via Pretraining with Active Forgetting (03 Jul 2023)
Yihong Chen, Kelly Marchisio, Roberta Raileanu, David Ifeoluwa Adelani, Pontus Stenetorp, Sebastian Riedel, Mikel Artetxe
Tags: KELM, AI4CE, CLL

VIBR: Learning View-Invariant Value Functions for Robust Visual Control (14 Jun 2023)
Tom Dupuis, Jaonary Rabarisoa, Q. C. Pham, David Filliat

Symmetric Replay Training: Enhancing Sample Efficiency in Deep Reinforcement Learning for Combinatorial Optimization (02 Jun 2023)
Hyeon-Seob Kim, Minsu Kim, Sungsoo Ahn, Jinkyoo Park
Tags: OffRL

Bigger, Better, Faster: Human-level Atari with human-level efficiency (30 May 2023)
Max Schwarzer, J. Obando-Ceron, Aaron C. Courville, Marc G. Bellemare, Rishabh Agarwal, P. S. Castro
Tags: OffRL

Off-Policy RL Algorithms Can be Sample-Efficient for Continuous Control via Sample Multiple Reuse (29 May 2023)
Jiafei Lyu, Le Wan, Zongqing Lu, Xiu Li
Tags: OffRL

Learning Better with Less: Effective Augmentation for Sample-Efficient Visual Reinforcement Learning (25 May 2023)
Guozheng Ma, Linrui Zhang, Haoyu Wang, Lu Li, Zilin Wang, Zhen Wang, Li Shen, Xueqian Wang, Dacheng Tao

Efficient Quality-Diversity Optimization through Diverse Quality Species (14 Apr 2023)
Ryan Wickman, Bibek Poudel, Taylor Michael Villarreal, Xiaofei Zhang, Weizi Li

The Ladder in Chaos: A Simple and Effective Improvement to General DRL Algorithms by Policy Path Trimming and Boosting (02 Mar 2023)
Hongyao Tang, M. Zhang, Jianye Hao

The Dormant Neuron Phenomenon in Deep Reinforcement Learning (24 Feb 2023)
Ghada Sokar, Rishabh Agarwal, P. S. Castro, Utku Evci
Tags: CLL

Which Experiences Are Influential for Your Agent? Policy Iteration with Turn-over Dropout (26 Jan 2023)
Takuya Hiraoka, Takashi Onishi, Yoshimasa Tsuruoka
Tags: OffRL

A Domain-Agnostic Approach for Characterization of Lifelong Learning Systems (18 Jan 2023)
Megan M. Baker, Alexander New, Mario Aguilar-Simon, Ziad Al-Halah, Sébastien M. R. Arnold, ..., Zifan Xu, A. Yanguas-Gil, Harel Yedidsion, Shangqun Yu, Gautam K. Vallabha

Human-Timescale Adaptation in an Open-Ended Task Space (18 Jan 2023)
Adaptive Agent Team, Jakob Bauer, Kate Baumli, Satinder Baveja, Feryal M. P. Behbahani, ..., Jakub Sygnowski, K. Tuyls, Sarah York, Alexander Zacherl, Lei Zhang
Tags: LM&Ro, OffRL, AI4CE, LRM

SkillS: Adaptive Skill Sequencing for Efficient Temporally-Extended Exploration (24 Nov 2022)
Giulia Vezzani, Dhruva Tirumala, Markus Wulfmeier, Dushyant Rao, A. Abdolmaleki, ..., Tim Hertweck, Thomas Lampe, Fereshteh Sadeghi, N. Heess, Martin Riedmiller
Tags: OffRL

Adversarial Cheap Talk (20 Nov 2022)
Chris Xiaoxuan Lu, Timon Willi, Alistair Letcher, Jakob N. Foerster
Tags: AAML

MineDojo: Building Open-Ended Embodied Agents with Internet-Scale Knowledge (17 Jun 2022)
Linxi Fan, Guanzhi Wang, Yunfan Jiang, Ajay Mandlekar, Yuncong Yang, Haoyi Zhu, Andrew Tang, De-An Huang, Yuke Zhu, Anima Anandkumar
Tags: LM&Ro

Is High Variance Unavoidable in RL? A Case Study in Continuous Control (21 Oct 2021)
Johan Bjorck, Carla P. Gomes, Kilian Q. Weinberger