Gradient Descent Optimizes Infinite-Depth ReLU Implicit Networks with Linear Widths
Tianxiang Gao, Hongyang Gao
arXiv:2205.07463 · 16 May 2022 · MLT
Papers citing "Gradient Descent Optimizes Infinite-Depth ReLU Implicit Networks with Linear Widths" (9 of 9 papers shown)
Global Convergence Rate of Deep Equilibrium Models with General Activations
Lan V. Truong · 11 Feb 2023 · 2 citations

Stabilizing Equilibrium Models by Jacobian Regularization
Shaojie Bai, V. Koltun, J. Zico Kolter · 28 Jun 2021 · 57 citations

Multiscale Deep Equilibrium Models
Shaojie Bai, V. Koltun, J. Zico Kolter · BDL · 15 Jun 2020 · 211 citations

Monotone operator equilibrium networks
Ezra Winston, J. Zico Kolter · 15 Jun 2020 · 130 citations

Implicit Deep Learning
L. Ghaoui, Fangda Gu, Bertrand Travacca, Armin Askari, Alicia Y. Tsai · AI4CE · 17 Aug 2019 · 178 citations

Stochastic Gradient Descent Optimizes Over-parameterized Deep ReLU Networks
Difan Zou, Yuan Cao, Dongruo Zhou, Quanquan Gu · ODL · 21 Nov 2018 · 448 citations

Gradient Descent Provably Optimizes Over-parameterized Neural Networks
S. Du, Xiyu Zhai, Barnabás Póczós, Aarti Singh · MLT, ODL · 04 Oct 2018 · 1,270 citations

Neural Ordinary Differential Equations
T. Chen, Yulia Rubanova, J. Bettencourt, David Duvenaud · AI4CE · 19 Jun 2018 · 5,081 citations

Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification
Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun · VLM · 06 Feb 2015 · 18,587 citations