A study on the plasticity of neural networks

31 May 2021
Tudor Berariu, Wojciech M. Czarnecki, Soham De, J. Bornschein, Samuel L. Smith, Razvan Pascanu, Claudia Clopath
Communities: CLL, AI4CE
arXiv: 2106.00042

Papers citing "A study on the plasticity of neural networks" (9 papers shown)

1. Breaking the Reclustering Barrier in Centroid-based Deep Clustering
   Lukas Miklautz, Timo Klein, Kevin Sidak, Collin Leiber, Thomas Lang, Andrii Shkabrii, Sebastian Tschiatschek, Claudia Plant
   04 Nov 2024

2. Neuroplastic Expansion in Deep Reinforcement Learning
   Jiashun Liu, J. Obando-Ceron, Rameswar Panda, L. Pan
   10 Oct 2024

3. Normalization and effective learning rates in reinforcement learning
   Clare Lyle, Zeyu Zheng, Khimya Khetarpal, James Martens, H. V. Hasselt, Razvan Pascanu, Will Dabney
   01 Jul 2024

4. Directions of Curvature as an Explanation for Loss of Plasticity
   Alex Lewandowski, Haruto Tanaka, Dale Schuurmans, Marlos C. Machado
   30 Nov 2023

5. Continual Learning: Applications and the Road Forward
   Eli Verwimp, Rahaf Aljundi, Shai Ben-David, Matthias Bethge, Andrea Cossu, ..., Joost van de Weijer, Bing Liu, Vincenzo Lomonaco, Tinne Tuytelaars, Gido M. van de Ven
   Communities: CLL
   20 Nov 2023

6. Understanding plasticity in neural networks
   Clare Lyle, Zeyu Zheng, Evgenii Nikishin, Bernardo Avila-Pires, Razvan Pascanu, Will Dabney
   Communities: AI4CE
   02 Mar 2023

7. The Dormant Neuron Phenomenon in Deep Reinforcement Learning
   Ghada Sokar, Rishabh Agarwal, Pablo Samuel Castro, Utku Evci
   Communities: CLL
   24 Feb 2023

8. Does Optimal Source Task Performance Imply Optimal Pre-training for a Target Task?
   Steven Gutstein, Brent Lance, Sanjay Shakkottai
   21 Jun 2021

9. On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
   N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang
   Communities: ODL
   15 Sep 2016