ResearchTrend.AI

Homotopy Relaxation Training Algorithms for Infinite-Width Two-Layer ReLU Neural Networks

arXiv:2309.15244 · 26 September 2023
Yahong Yang, Qipin Chen, Wenrui Hao

Papers citing "Homotopy Relaxation Training Algorithms for Infinite-Width Two-Layer ReLU Neural Networks" (11 papers)
1. Side Effects of Learning from Low-dimensional Data Embedded in a Euclidean Space
   Juncai He, R. Tsai, Rachel A. Ward (01 Mar 2022)

2. Deep Kronecker neural networks: A general framework for neural networks with adaptive activation functions
   Ameya Dilip Jagtap, Yeonjong Shin, Kenji Kawaguchi, George Karniadakis (20 May 2021) [ODL]

3. ALReLU: A different approach on Leaky ReLU activation function to improve Neural Networks Performance
   S. Mastromichalakis (11 Dec 2020)

4. Stochastic Gradient Descent with Nonlinear Conjugate Gradient-Style Adaptive Momentum
   Bao Wang, Qiang Ye (03 Dec 2020) [ODL]

5. Large Batch Optimization for Deep Learning: Training BERT in 76 minutes
   Yang You, Jing Li, Sashank J. Reddi, Jonathan Hseu, Sanjiv Kumar, Srinadh Bhojanapalli, Xiaodan Song, J. Demmel, Kurt Keutzer, Cho-Jui Hsieh (01 Apr 2019) [ODL]

6. Generalization Error Bounds of Gradient Descent for Learning Over-parameterized Deep ReLU Networks
   Yuan Cao, Quanquan Gu (04 Feb 2019) [ODL, MLT, AI4CE]

7. Gradient Descent Provably Optimizes Over-parameterized Neural Networks
   S. Du, Xiyu Zhai, Barnabás Póczós, Aarti Singh (04 Oct 2018) [MLT, ODL]

8. The Deep Ritz method: A deep learning-based numerical algorithm for solving variational problems
   Weinan E, Bing Yu (30 Sep 2017)

9. On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
   N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang (15 Sep 2016) [ODL]

10. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification
    Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun (06 Feb 2015) [VLM]

11. Learning Activation Functions to Improve Deep Neural Networks
    Forest Agostinelli, Matthew Hoffman, Peter Sadowski, Pierre Baldi (21 Dec 2014) [ODL]