Truth or Backpropaganda? An Empirical Investigation of Deep Learning Theory
arXiv:1910.00359 · 1 October 2019
Micah Goldblum, Jonas Geiping, Avi Schwarzschild, Michael Moeller, Tom Goldstein
Papers citing "Truth or Backpropaganda? An Empirical Investigation of Deep Learning Theory" (20 of 20 papers shown)
Just How Flexible are Neural Networks in Practice?
Ravid Shwartz-Ziv, Micah Goldblum, Arpit Bansal, C. Bayan Bruss, Yann LeCun, Andrew Gordon Wilson
17 Jun 2024
Mildly Overparameterized ReLU Networks Have a Favorable Loss Landscape
Kedar Karhadkar, Michael Murray, Hanna Tseran, Guido Montúfar
31 May 2023
Winning the Lottery Ahead of Time: Efficient Early Network Pruning
John Rachwan, Daniel Zügner, Bertrand Charpentier, Simon Geisler, Morgane Ayle, Stephan Günnemann
21 Jun 2022
Understanding Deep Learning via Decision Boundary
Shiye Lei, Fengxiang He, Yancheng Yuan, Dacheng Tao
03 Jun 2022
Demystifying the Neural Tangent Kernel from a Practical Perspective: Can it be trusted for Neural Architecture Search without training?
J. Mok, Byunggook Na, Ji-Hoon Kim, Dongyoon Han, Sungroh Yoon
AAML
28 Mar 2022
On the Omnipresence of Spurious Local Minima in Certain Neural Network Training Problems
C. Christof, Julia Kowalczyk
23 Feb 2022
Robbing the Fed: Directly Obtaining Private Data in Federated Learning with Modified Models
Liam H. Fowl, Jonas Geiping, W. Czaja, Micah Goldblum, Tom Goldstein
FedML
25 Oct 2021
An Unconstrained Layer-Peeled Perspective on Neural Collapse
Wenlong Ji, Yiping Lu, Yiliang Zhang, Zhun Deng, Weijie J. Su
06 Oct 2021
Stochastic Training is Not Necessary for Generalization
Jonas Geiping, Micah Goldblum, Phillip E. Pope, Michael Moeller, Tom Goldstein
29 Sep 2021
A linearized framework and a new benchmark for model selection for fine-tuning
Aditya Deshpande, Alessandro Achille, Avinash Ravichandran, Hao Li, Luca Zancato, Charless C. Fowlkes, Rahul Bhotika, Stefano Soatto, Pietro Perona
ALM
29 Jan 2021
Recent advances in deep learning theory
Fengxiang He, Dacheng Tao
AI4CE
20 Dec 2020
Effects of Parameter Norm Growth During Transformer Training: Inductive Bias from Gradient Descent
William Merrill, Vivek Ramanujan, Yoav Goldberg, Roy Schwartz, Noah A. Smith
AI4CE
19 Oct 2020
Pareto Probing: Trading Off Accuracy for Complexity
Tiago Pimentel, Naomi Saphra, Adina Williams, Robert Bamler
05 Oct 2020
Ramifications of Approximate Posterior Inference for Bayesian Deep Learning in Adversarial and Out-of-Distribution Settings
John Mitros, A. Pakrashi, Brian Mac Namee
UQCV
03 Sep 2020
Predicting Training Time Without Training
Luca Zancato, Alessandro Achille, Avinash Ravichandran, Rahul Bhotika, Stefano Soatto
28 Aug 2020
Finite Versus Infinite Neural Networks: an Empirical Study
Jaehoon Lee, S. Schoenholz, Jeffrey Pennington, Ben Adlam, Lechao Xiao, Roman Novak, Jascha Narain Sohl-Dickstein
31 Jul 2020
Inverting Gradients -- How easy is it to break privacy in federated learning?
Jonas Geiping, Hartmut Bauermeister, Hannah Dröge, Michael Moeller
FedML
31 Mar 2020
Piecewise linear activations substantially shape the loss surfaces of neural networks
Fengxiang He, Bohan Wang, Dacheng Tao
ODL
27 Mar 2020
Unraveling Meta-Learning: Understanding Feature Representations for Few-Shot Tasks
Micah Goldblum, Steven Reich, Liam H. Fowl, Renkun Ni, Valeriia Cherepanova, Tom Goldstein
SSL, OffRL
17 Feb 2020
Four Things Everyone Should Know to Improve Batch Normalization
Cecilia Summers, M. Dinneen
09 Jun 2019