Global Convergence Rate of Deep Equilibrium Models with General Activations
Lan V. Truong
11 February 2023 · arXiv:2302.05797 (v4, latest)

Papers citing "Global Convergence Rate of Deep Equilibrium Models with General Activations" (22 papers)

  1. Deep Equilibrium Models are Almost Equivalent to Not-so-deep Explicit Models for High-dimensional Gaussian Mixtures. Zenan Ling, Longbo Li, Zhanbo Feng, Yixuan Zhang, Feng Zhou, Robert C. Qiu, Zhenyu Liao. 05 Feb 2024.
  2. Wide Neural Networks as Gaussian Processes: Lessons from Deep Equilibrium Models. Tianxiang Gao, Xiaokai Huo, Hailiang Liu, Hongyang Gao. 16 Oct 2023.
  3. On the Equivalence between Implicit and Explicit Neural Networks: A High-dimensional Viewpoint. Zenan Ling, Zhenyu Liao, Robert C. Qiu. 31 Aug 2023.
  4. On Rademacher Complexity-based Generalization Bounds for Deep Learning. Lan V. Truong. 08 Aug 2022.
  5. Global Convergence of Over-parameterized Deep Equilibrium Models. Zenan Ling, Xingyu Xie, Qiuhao Wang, Zongpeng Zhang, Zhouchen Lin. 27 May 2022.
  6. Gradient Descent Optimizes Infinite-Depth ReLU Implicit Networks with Linear Widths. Tianxiang Gao, Hongyang Gao. 16 May 2022.
  7. Generalization Error Bounds on Deep Learning with Markov Datasets. Lan V. Truong. 23 Dec 2021.
  8. A global convergence theory for deep ReLU implicit networks via over-parameterization. Tianxiang Gao, Hailiang Liu, Jia Liu, Hridesh Rajan, Hongyang Gao. 11 Oct 2021.
  9. Deep Equilibrium Architectures for Inverse Problems in Imaging. Davis Gilton, Greg Ongie, Rebecca Willett. 16 Feb 2021.
  10. On the Proof of Global Convergence of Gradient Descent for Deep ReLU Networks with Linear Widths. Quynh N. Nguyen. 24 Jan 2021.
  11. Differentiable PAC-Bayes Objectives with Partially Aggregated Neural Networks. Felix Biggs, Benjamin Guedj. 22 Jun 2020.
  12. Multiscale Deep Equilibrium Models. Shaojie Bai, V. Koltun, J. Zico Kolter. 15 Jun 2020.
  13. Generalization Error Bounds Via Rényi-, $f$-Divergences and Maximal Leakage. A. Esposito, Michael C. Gastpar, Ibrahim Issa. 01 Dec 2019.
  14. Deep Equilibrium Models. Shaojie Bai, J. Zico Kolter, V. Koltun. 03 Sep 2019.
  15. On Exact Computation with an Infinitely Wide Neural Net. Sanjeev Arora, S. Du, Wei Hu, Zhiyuan Li, Ruslan Salakhutdinov, Ruosong Wang. 26 Apr 2019.
  16. Gradient Descent Finds Global Minima of Deep Neural Networks. S. Du, Jason D. Lee, Haochuan Li, Liwei Wang, Masayoshi Tomizuka. 09 Nov 2018.
  17. Learning Overparameterized Neural Networks via Stochastic Gradient Descent on Structured Data. Yuanzhi Li, Yingyu Liang. 03 Aug 2018.
  18. Generalization Error in Deep Learning. Daniel Jakubovitz, Raja Giryes, M. Rodrigues. 03 Aug 2018.
  19. Information-theoretic analysis of generalization capability of learning algorithms. Aolin Xu, Maxim Raginsky. 22 May 2017.
  20. Computing Nonvacuous Generalization Bounds for Deep (Stochastic) Neural Networks with Many More Parameters than Training Data. Gintare Karolina Dziugaite, Daniel M. Roy. 31 Mar 2017.
  21. Toward Deeper Understanding of Neural Networks: The Power of Initialization and a Dual View on Expressivity. Amit Daniely, Roy Frostig, Y. Singer. 18 Feb 2016.
  22. From average case complexity to improper learning complexity. Amit Daniely, N. Linial, Shai Shalev-Shwartz. 10 Nov 2013.