Deep Equilibrium Models are Almost Equivalent to Not-so-deep Explicit Models for High-dimensional Gaussian Mixtures
5 February 2024 · arXiv:2402.02697
Zenan Ling, Longbo Li, Zhanbo Feng, Yixuan Zhang, Feng Zhou, Robert C. Qiu, Zhenyu Liao

Papers citing "Deep Equilibrium Models are Almost Equivalent to Not-so-deep Explicit Models for High-dimensional Gaussian Mixtures"

7 papers
DDEQs: Distributional Deep Equilibrium Models through Wasserstein Gradient Flows
Jonathan Geuter, Clément Bonet, Anna Korba, David Alvarez-Melis · 03 Mar 2025
Understanding Representation of Deep Equilibrium Models from Neural Collapse Perspective
Haixiang Sun, Ye Shi · 30 Oct 2024
IGNN-Solver: A Graph Neural Solver for Implicit Graph Neural Networks
Junchao Lin, Zenan Ling, Zhanbo Feng, Jingwen Xu, Feng Zhou, Tianqi Hou, Zhenyu Liao, Robert C. Qiu · GNN, AI4CE · 11 Oct 2024
On Dissipativity of Cross-Entropy Loss in Training ResNets
Jens Püttschneider, T. Faulwasser · 29 May 2024
"Lossless" Compression of Deep Neural Networks: A High-dimensional
  Neural Tangent Kernel Approach
"Lossless" Compression of Deep Neural Networks: A High-dimensional Neural Tangent Kernel Approach
Lingyu Gu
Yongqiang Du
Yuan Zhang
Di Xie
Shiliang Pu
Robert C. Qiu
Zhenyu Liao
44
6
0
01 Mar 2024
Gradient Descent Optimizes Infinite-Depth ReLU Implicit Networks with Linear Widths
Tianxiang Gao, Hongyang Gao · MLT · 16 May 2022
Random matrices in service of ML footprint: ternary random features with no performance loss
Hafiz Tiomoko Ali, Zhenyu Liao, Romain Couillet · 05 Oct 2021