ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Extreme Memorization via Scale of Initialization (arXiv:2008.13363)

31 August 2020
Harsh Mehta, Ashok Cutkosky, Behnam Neyshabur

Papers citing "Extreme Memorization via Scale of Initialization"

9 of 9 papers shown

  1. Fast Training of Sinusoidal Neural Fields via Scaling Initialization. Taesun Yeom, Sangyoon Lee, Jaeho Lee. 07 Oct 2024.
  2. Elephant Neural Networks: Born to Be a Continual Learner [CLL]. Qingfeng Lan, A. Rupam Mahmood. 02 Oct 2023.
  3. On the Lipschitz Constant of Deep Networks and Double Descent. Matteo Gamba, Hossein Azizpour, Mårten Björkman. 28 Jan 2023.
  4. ATLAS: Universal Function Approximator for Memory Retention. H. V. Deventer, Anna Sergeevna Bosman. 10 Aug 2022.
  5. Investigating Generalization by Controlling Normalized Margin. Alexander R. Farhang, Jeremy Bernstein, Kushal Tirumala, Yang Liu, Yisong Yue. 08 May 2022.
  6. Neural Fields in Visual Computing and Beyond [3DH]. Yiheng Xie, Towaki Takikawa, Shunsuke Saito, Or Litany, Shiqin Yan, Numair Khan, Federico Tombari, James Tompkin, Vincent Sitzmann, Srinath Sridhar. 22 Nov 2021.
  7. Stochastic Training is Not Necessary for Generalization. Jonas Geiping, Micah Goldblum, Phillip E. Pope, Michael Moeller, Tom Goldstein. 29 Sep 2021.
  8. On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima [ODL]. N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang. 15 Sep 2016.
  9. Norm-Based Capacity Control in Neural Networks. Behnam Neyshabur, Ryota Tomioka, Nathan Srebro. 27 Feb 2015.