When Expressivity Meets Trainability: Fewer than $n$ Neurons Can Work

arXiv:2210.12001 · 21 October 2022
Jiawei Zhang, Yushun Zhang, Mingyi Hong, Ruoyu Sun, Z. Luo

Papers citing "When Expressivity Meets Trainability: Fewer than $n$ Neurons Can Work"

14 / 14 papers shown

Architecture independent generalization bounds for overparametrized deep ReLU networks
Thomas Chen, Chun-Kai Kevin Chien, Patrícia Muñoz Ewald, Andrew G. Moore
08 Apr 2025 · Citations: 0

Anytime Neural Architecture Search on Tabular Data
Naili Xing, Shaofeng Cai, Zhaojing Luo, Bengchin Ooi, Jian Pei
15 Mar 2024 · Citations: 1

Memorization with neural nets: going beyond the worst case
S. Dirksen, Patrick Finke, Martin Genzel
30 Sep 2023 · Citations: 0

Memory capacity of two layer neural networks with smooth activations
Liam Madden, Christos Thrampoulidis
03 Aug 2023 · MLT · Citations: 5

Memorization Capacity of Multi-Head Attention in Transformers
Sadegh Mahdavi, Renjie Liao, Christos Thrampoulidis
03 Jun 2023 · Citations: 22

NTK-SAP: Improving neural network pruning by aligning training dynamics
Yite Wang, Dawei Li, Ruoyu Sun
06 Apr 2023 · Citations: 19

Neural Collapse with Normalized Features: A Geometric Analysis over the Riemannian Manifold
Can Yaras, Peng Wang, Zhihui Zhu, Laura Balzano, Qing Qu
19 Sep 2022 · Citations: 41

Why Robust Generalization in Deep Learning is Difficult: Perspective of Expressive Power
Binghui Li, Jikai Jin, Han Zhong, J. Hopcroft, Liwei Wang
27 May 2022 · OOD · Citations: 27

Achieving Small Test Error in Mildly Overparameterized Neural Networks
Shiyu Liang, Ruoyu Sun, R. Srikant
24 Apr 2021 · Citations: 3

A Local Convergence Theory for Mildly Over-Parameterized Two-Layer Neural Network
Mo Zhou, Rong Ge, Chi Jin
04 Feb 2021 · Citations: 44

On the Proof of Global Convergence of Gradient Descent for Deep ReLU Networks with Linear Widths
Quynh N. Nguyen
24 Jan 2021 · Citations: 49

Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights
Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen
10 Feb 2017 · MQ · Citations: 1,047

Benefits of depth in neural networks
Matus Telgarsky
14 Feb 2016 · Citations: 602

ImageNet Large Scale Visual Recognition Challenge
Olga Russakovsky, Jia Deng, Hao Su, J. Krause, S. Satheesh, ..., A. Karpathy, A. Khosla, Michael S. Bernstein, Alexander C. Berg, Li Fei-Fei
01 Sep 2014 · VLM · ObjD · Citations: 39,194