Truth or Backpropaganda? An Empirical Investigation of Deep Learning Theory
Micah Goldblum, Jonas Geiping, Avi Schwarzschild, Michael Moeller, Tom Goldstein
arXiv:1910.00359 (v3, latest), 1 October 2019
Papers citing "Truth or Backpropaganda? An Empirical Investigation of Deep Learning Theory" (20 of 20 papers shown)

1. "Just How Flexible are Neural Networks in Practice?" (Ravid Shwartz-Ziv, Micah Goldblum, Arpit Bansal, C. Bayan Bruss, Yann LeCun, Andrew Gordon Wilson; 17 Jun 2024)
2. "Mildly Overparameterized ReLU Networks Have a Favorable Loss Landscape" (Kedar Karhadkar, Michael Murray, Hanna Tseran, Guido Montúfar; 31 May 2023)
3. "Winning the Lottery Ahead of Time: Efficient Early Network Pruning" (John Rachwan, Daniel Zügner, Bertrand Charpentier, Simon Geisler, Morgane Ayle, Stephan Günnemann; 21 Jun 2022)
4. "Understanding Deep Learning via Decision Boundary" (Shiye Lei, Fengxiang He, Yancheng Yuan, Dacheng Tao; 03 Jun 2022)
5. "Demystifying the Neural Tangent Kernel from a Practical Perspective: Can it be trusted for Neural Architecture Search without training?" (J. Mok, Byunggook Na, Ji-Hoon Kim, Dongyoon Han, Sungroh Yoon; 28 Mar 2022) [AAML]
6. "On the Omnipresence of Spurious Local Minima in Certain Neural Network Training Problems" (C. Christof, Julia Kowalczyk; 23 Feb 2022)
7. "Robbing the Fed: Directly Obtaining Private Data in Federated Learning with Modified Models" (Liam H. Fowl, Jonas Geiping, W. Czaja, Micah Goldblum, Tom Goldstein; 25 Oct 2021) [FedML]
8. "An Unconstrained Layer-Peeled Perspective on Neural Collapse" (Wenlong Ji, Yiping Lu, Yiliang Zhang, Zhun Deng, Weijie J. Su; 06 Oct 2021)
9. "Stochastic Training is Not Necessary for Generalization" (Jonas Geiping, Micah Goldblum, Phillip E. Pope, Michael Moeller, Tom Goldstein; 29 Sep 2021)
10. "A linearized framework and a new benchmark for model selection for fine-tuning" (Aditya Deshpande, Alessandro Achille, Avinash Ravichandran, Hao Li, Luca Zancato, Charless C. Fowlkes, Rahul Bhotika, Stefano Soatto, Pietro Perona; 29 Jan 2021) [ALM]
11. "Recent advances in deep learning theory" (Fengxiang He, Dacheng Tao; 20 Dec 2020) [AI4CE]
12. "Effects of Parameter Norm Growth During Transformer Training: Inductive Bias from Gradient Descent" (William Merrill, Vivek Ramanujan, Yoav Goldberg, Roy Schwartz, Noah A. Smith; 19 Oct 2020) [AI4CE]
13. "Pareto Probing: Trading Off Accuracy for Complexity" (Tiago Pimentel, Naomi Saphra, Adina Williams, Ryan Cotterell; 05 Oct 2020)
14. "Ramifications of Approximate Posterior Inference for Bayesian Deep Learning in Adversarial and Out-of-Distribution Settings" (John Mitros, A. Pakrashi, Brian Mac Namee; 03 Sep 2020) [UQCV]
15. "Predicting Training Time Without Training" (Luca Zancato, Alessandro Achille, Avinash Ravichandran, Rahul Bhotika, Stefano Soatto; 28 Aug 2020)
16. "Finite Versus Infinite Neural Networks: an Empirical Study" (Jaehoon Lee, S. Schoenholz, Jeffrey Pennington, Ben Adlam, Lechao Xiao, Roman Novak, Jascha Narain Sohl-Dickstein; 31 Jul 2020)
17. "Inverting Gradients -- How easy is it to break privacy in federated learning?" (Jonas Geiping, Hartmut Bauermeister, Hannah Dröge, Michael Moeller; 31 Mar 2020) [FedML]
18. "Piecewise linear activations substantially shape the loss surfaces of neural networks" (Fengxiang He, Bohan Wang, Dacheng Tao; 27 Mar 2020) [ODL]
19. "Unraveling Meta-Learning: Understanding Feature Representations for Few-Shot Tasks" (Micah Goldblum, Steven Reich, Liam H. Fowl, Renkun Ni, Valeriia Cherepanova, Tom Goldstein; 17 Feb 2020) [SSL, OffRL]
20. "Four Things Everyone Should Know to Improve Batch Normalization" (Cecilia Summers, M. Dinneen; 09 Jun 2019)