Truth or Backpropaganda? An Empirical Investigation of Deep Learning Theory

1 October 2019
Micah Goldblum, Jonas Geiping, Avi Schwarzschild, Michael Moeller, Tom Goldstein
arXiv:1910.00359 (abs · PDF · HTML)

Papers citing "Truth or Backpropaganda? An Empirical Investigation of Deep Learning Theory"

20 / 20 papers shown

Just How Flexible are Neural Networks in Practice?
Ravid Shwartz-Ziv, Micah Goldblum, Arpit Bansal, C. Bayan Bruss, Yann LeCun, Andrew Gordon Wilson
17 Jun 2024

Mildly Overparameterized ReLU Networks Have a Favorable Loss Landscape
Kedar Karhadkar, Michael Murray, Hanna Tseran, Guido Montúfar
31 May 2023

Winning the Lottery Ahead of Time: Efficient Early Network Pruning
John Rachwan, Daniel Zügner, Bertrand Charpentier, Simon Geisler, Morgane Ayle, Stephan Günnemann
21 Jun 2022

Understanding Deep Learning via Decision Boundary
Shiye Lei, Fengxiang He, Yancheng Yuan, Dacheng Tao
03 Jun 2022

Demystifying the Neural Tangent Kernel from a Practical Perspective: Can it be trusted for Neural Architecture Search without training?
J. Mok, Byunggook Na, Ji-Hoon Kim, Dongyoon Han, Sungroh Yoon
28 Mar 2022 · AAML

On the Omnipresence of Spurious Local Minima in Certain Neural Network Training Problems
C. Christof, Julia Kowalczyk
23 Feb 2022

Robbing the Fed: Directly Obtaining Private Data in Federated Learning with Modified Models
Liam H. Fowl, Jonas Geiping, W. Czaja, Micah Goldblum, Tom Goldstein
25 Oct 2021 · FedML

An Unconstrained Layer-Peeled Perspective on Neural Collapse
Wenlong Ji, Yiping Lu, Yiliang Zhang, Zhun Deng, Weijie J. Su
06 Oct 2021

Stochastic Training is Not Necessary for Generalization
Jonas Geiping, Micah Goldblum, Phillip E. Pope, Michael Moeller, Tom Goldstein
29 Sep 2021

A linearized framework and a new benchmark for model selection for fine-tuning
Aditya Deshpande, Alessandro Achille, Avinash Ravichandran, Hao Li, Luca Zancato, Charless C. Fowlkes, Rahul Bhotika, Stefano Soatto, Pietro Perona
29 Jan 2021 · ALM

Recent advances in deep learning theory
Fengxiang He, Dacheng Tao
20 Dec 2020 · AI4CE

Effects of Parameter Norm Growth During Transformer Training: Inductive Bias from Gradient Descent
William Merrill, Vivek Ramanujan, Yoav Goldberg, Roy Schwartz, Noah A. Smith
19 Oct 2020 · AI4CE

Pareto Probing: Trading Off Accuracy for Complexity
Tiago Pimentel, Naomi Saphra, Adina Williams, Ryan Cotterell
05 Oct 2020

Ramifications of Approximate Posterior Inference for Bayesian Deep Learning in Adversarial and Out-of-Distribution Settings
John Mitros, A. Pakrashi, Brian Mac Namee
03 Sep 2020 · UQCV

Predicting Training Time Without Training
Luca Zancato, Alessandro Achille, Avinash Ravichandran, Rahul Bhotika, Stefano Soatto
28 Aug 2020

Finite Versus Infinite Neural Networks: an Empirical Study
Jaehoon Lee, S. Schoenholz, Jeffrey Pennington, Ben Adlam, Lechao Xiao, Roman Novak, Jascha Narain Sohl-Dickstein
31 Jul 2020

Inverting Gradients -- How easy is it to break privacy in federated learning?
Jonas Geiping, Hartmut Bauermeister, Hannah Dröge, Michael Moeller
31 Mar 2020 · FedML

Piecewise linear activations substantially shape the loss surfaces of neural networks
Fengxiang He, Bohan Wang, Dacheng Tao
27 Mar 2020 · ODL

Unraveling Meta-Learning: Understanding Feature Representations for Few-Shot Tasks
Micah Goldblum, Steven Reich, Liam H. Fowl, Renkun Ni, Valeriia Cherepanova, Tom Goldstein
17 Feb 2020 · SSL, OffRL

Four Things Everyone Should Know to Improve Batch Normalization
Cecilia Summers, M. Dinneen
09 Jun 2019