The Impact of Reinitialization on Generalization in Convolutional Neural Networks
arXiv:2109.00267
1 September 2021
Ibrahim M. Alabdulmohsin, Hartmut Maennel, Daniel Keysers
Topics: AI4CE

Papers citing "The Impact of Reinitialization on Generalization in Convolutional Neural Networks"

16 / 16 papers shown

Breaking the Reclustering Barrier in Centroid-based Deep Clustering
Lukas Miklautz, Timo Klein, Kevin Sidak, Collin Leiber, Thomas Lang, Andrii Shkabrii, Sebastian Tschiatschek, Claudia Plant
04 Nov 2024

Parameter-Efficient Fine-Tuning for Continual Learning: A Neural Tangent Kernel Perspective
Jingren Liu, Zhong Ji, YunLong Yu, Jiale Cao, Yanwei Pang, Jungong Han, Xuelong Li
Topics: CLL
24 Jul 2024

Dual Process Learning: Controlling Use of In-Context vs. In-Weights Strategies with Weight Forgetting
Suraj Anand, Michael A. Lepori, Jack Merullo, Ellie Pavlick
Topics: CLL
28 May 2024

Conserve-Update-Revise to Cure Generalization and Robustness Trade-off in Adversarial Training
Shruthi Gowda, Bahram Zonooz, Elahe Arani
Topics: AAML
26 Jan 2024

Reset It and Forget It: Relearning Last-Layer Weights Improves Continual and Transfer Learning
Lapo Frati, Neil Traft, Jeff Clune, Nick Cheney
Topics: CLL
12 Oct 2023

Improving Language Plasticity via Pretraining with Active Forgetting
Yihong Chen, Kelly Marchisio, Roberta Raileanu, David Ifeoluwa Adelani, Pontus Stenetorp, Sebastian Riedel, Mikel Artetxe
Topics: KELM, AI4CE, CLL
03 Jul 2023

Robust Ante-hoc Graph Explainer using Bilevel Optimization
Kha-Dinh Luong, Mert Kosan, A. Silva, Ambuj K. Singh
25 May 2023

Learn, Unlearn and Relearn: An Online Learning Paradigm for Deep Neural Networks
V. Ramkumar, Elahe Arani, Bahram Zonooz
Topics: MU, OnRL, CLL
18 Mar 2023

The Dormant Neuron Phenomenon in Deep Reinforcement Learning
Ghada Sokar, Rishabh Agarwal, Pablo Samuel Castro, Utku Evci
Topics: CLL
24 Feb 2023

Towards Cross Domain Generalization of Hamiltonian Representation via Meta Learning
Yeongwoo Song, Hawoong Jeong
Topics: OOD, AI4CE
02 Dec 2022

When Does Re-initialization Work?
Sheheryar Zaidi, Tudor Berariu, Hyunjik Kim, J. Bornschein, Claudia Clopath, Yee Whye Teh, Razvan Pascanu
20 Jun 2022

The Primacy Bias in Deep Reinforcement Learning
Evgenii Nikishin, Max Schwarzer, P. D'Oro, Pierre-Luc Bacon, Rameswar Panda
Topics: OnRL
16 May 2022

Fortuitous Forgetting in Connectionist Networks
Hattie Zhou, Ankit Vani, Hugo Larochelle, Aaron Courville
Topics: CLL
01 Feb 2022

MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications
Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, M. Andreetto, Hartwig Adam
Topics: 3DH
17 Apr 2017

On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang
Topics: ODL
15 Sep 2016

Norm-Based Capacity Control in Neural Networks
Behnam Neyshabur, Ryota Tomioka, Nathan Srebro
27 Feb 2015