Representing smooth functions as compositions of near-identity functions with implications for deep network optimization

13 April 2018
Peter L. Bartlett, S. Evans, Philip M. Long
arXiv:1804.05012 (abs · PDF · HTML)

Papers citing "Representing smooth functions as compositions of near-identity functions with implications for deep network optimization"

12 papers

Simple and Principled Uncertainty Estimation with Deterministic Deep Learning via Distance Awareness
Jeremiah Zhe Liu, Zi Lin, Shreyas Padhy, Dustin Tran, Tania Bedrax-Weiss, Balaji Lakshminarayanan
Topics: UQCV, BDL
171 · 451 · 0 · 17 Jun 2020

Gradient descent with identity initialization efficiently learns positive definite linear transformations by deep residual networks
Peter L. Bartlett, D. Helmbold, Philip M. Long
84 · 116 · 0 · 16 Feb 2018

The Reversible Residual Network: Backpropagation Without Storing Activations
Aidan Gomez, Mengye Ren, R. Urtasun, Roger C. Grosse
74 · 550 · 0 · 14 Jul 2017

Identity Matters in Deep Learning
Moritz Hardt, Tengyu Ma
Topics: OOD
87 · 398 · 0 · 14 Nov 2016

Exponential expressivity in deep neural networks through transient chaos
Ben Poole, Subhaneil Lahiri, M. Raghu, Jascha Narain Sohl-Dickstein, Surya Ganguli
90 · 592 · 0 · 16 Jun 2016

Faster Eigenvector Computation via Shift-and-Invert Preconditioning
Dan Garber, Laurent Dinh, Chi Jin, Jascha Narain Sohl-Dickstein, Samy Bengio, Praneeth Netrapalli, Aaron Sidford
272 · 3,702 · 0 · 26 May 2016

Learning Functions: When Is Deep Better Than Shallow
H. Mhaskar, Q. Liao, T. Poggio
69 · 144 · 0 · 03 Mar 2016

Representation Benefits of Deep Feedforward Networks
Matus Telgarsky
76 · 242 · 0 · 27 Sep 2015

Gradient-based Hyperparameter Optimization through Reversible Learning
D. Maclaurin, David Duvenaud, Ryan P. Adams
Topics: DD
227 · 945 · 0 · 11 Feb 2015

Breaking the Curse of Dimensionality with Convex Neural Networks
Francis R. Bach
184 · 706 · 0 · 30 Dec 2014

On the Number of Linear Regions of Deep Neural Networks
Guido Montúfar, Razvan Pascanu, Kyunghyun Cho, Yoshua Bengio
88 · 1,254 · 0 · 08 Feb 2014

Predicting Parameters in Deep Learning
Misha Denil, B. Shakibi, Laurent Dinh, Marc'Aurelio Ranzato, Nando de Freitas
Topics: OOD
200 · 1,319 · 0 · 03 Jun 2013