Learning Multi-Index Models with Neural Networks via Mean-Field Langevin Dynamics
Alireza Mousavi-Hosseini, Denny Wu, Murat A. Erdogdu
14 August 2024 (arXiv:2408.07254)
Tags: MLT, AI4CE

Papers citing "Learning Multi-Index Models with Neural Networks via Mean-Field Langevin Dynamics"

12 papers shown.

Mirror Mean-Field Langevin Dynamics
Anming Gu, Juno Kim
05 May 2025

Survey on Algorithms for multi-index models
Joan Bruna, Daniel Hsu
07 Apr 2025

Learning a Single Index Model from Anisotropic Data with vanilla Stochastic Gradient Descent
Guillaume Braun, Minh Ha Quang, Masaaki Imaizumi
Tags: MLT
31 Mar 2025

When Do Transformers Outperform Feedforward and Recurrent Networks? A Statistical Perspective
Alireza Mousavi-Hosseini, Clayton Sanford, Denny Wu, Murat A. Erdogdu
14 Mar 2025

Propagation of Chaos for Mean-Field Langevin Dynamics and its Application to Model Ensemble
Atsushi Nitanda, Anzelle Lee, Damian Tan Xing Kai, Mizuki Sakaguchi, Taiji Suzuki
Tags: AI4CE
09 Feb 2025

Robust Feature Learning for Multi-Index Models in High Dimensions
Alireza Mousavi-Hosseini, Adel Javanmard, Murat A. Erdogdu
Tags: OOD, AAML
21 Oct 2024

Sampling from the Mean-Field Stationary Distribution
Yunbum Kook, Matthew Shunshi Zhang, Sinho Chewi, Murat A. Erdogdu, Mufan Bill Li
12 Feb 2024

The Benefits of Reusing Batches for Gradient Descent in Two-Layer Networks: Breaking the Curse of Information and Leap Exponents
Yatin Dandi, Emanuele Troiani, Luca Arnaboldi, Luca Pesce, Lenka Zdeborová, Florent Krzakala
Tags: MLT
05 Feb 2024

SGD learning on neural networks: leap complexity and saddle-to-saddle dynamics
Emmanuel Abbe, Enric Boix-Adserà, Theodor Misiakiewicz
Tags: FedML, MLT
21 Feb 2023

Learning Single-Index Models with Shallow Neural Networks
A. Bietti, Joan Bruna, Clayton Sanford, M. Song
27 Oct 2022

Neural Networks Efficiently Learn Low-Dimensional Representations with SGD
Alireza Mousavi-Hosseini, Sejun Park, M. Girotti, Ioannis Mitliagkas, Murat A. Erdogdu
Tags: MLT
29 Sep 2022

Convex Analysis of the Mean Field Langevin Dynamics
Atsushi Nitanda, Denny Wu, Taiji Suzuki
Tags: MLT
25 Jan 2022