Diverse Neural Network Learns True Target Functions
Bo Xie, Yingyu Liang, Le Song
arXiv:1611.03131 · 9 November 2016

Papers citing "Diverse Neural Network Learns True Target Functions" (40 papers shown)

1. WLD-Reg: A Data-dependent Within-layer Diversity Regularizer
   Firas Laakom, Jenni Raitoharju, Alexandros Iosifidis, Moncef Gabbouj · AI4CE · 03 Jan 2023

2. Dynamic Sparse Training via Balancing the Exploration-Exploitation Trade-off
   Shaoyi Huang, Bowen Lei, Dongkuan Xu, Hongwu Peng, Yue Sun, Mimi Xie, Caiwen Ding · 30 Nov 2022

3. Characterizing the Spectrum of the NTK via a Power Series Expansion
   Michael Murray, Hui Jin, Benjamin Bowman, Guido Montúfar · 15 Nov 2022

4. When Expressivity Meets Trainability: Fewer than $n$ Neurons Can Work
   Jiawei Zhang, Yushun Zhang, Mingyi Hong, Ruoyu Sun, Zhi-Quan Luo · 21 Oct 2022

5. Learning Diversified Feature Representations for Facial Expression Recognition in the Wild
   Negar Heidari, Alexandros Iosifidis · CVBM · 17 Oct 2022

6. The Principle of Diversity: Training Stronger Vision Transformers Calls for Reducing All Levels of Redundancy
   Tianlong Chen, Zhenyu Zhang, Yu Cheng, Ahmed Hassan Awadallah, Zhangyang Wang · ViT · 12 Mar 2022

7. Benefit of Interpolation in Nearest Neighbor Algorithms
   Yue Xing, Qifan Song, Guang Cheng · 23 Feb 2022

8. Improved Overparametrization Bounds for Global Convergence of Stochastic Gradient Descent for Shallow Neural Networks
   Bartlomiej Polaczyk, J. Cyranka · ODL · 28 Jan 2022

9. Overview frequency principle/spectral bias in deep learning
   Z. Xu, Tao Luo, Yaoyu Zhang · FaML · 19 Jan 2022

10. Rethinking Influence Functions of Neural Networks in the Over-parameterized Regime
    Rui Zhang, Shihua Zhang · TDI · 15 Dec 2021

11. Fast Sketching of Polynomial Kernels of Polynomial Degree
    Zhao Song, David P. Woodruff, Zheng Yu, Lichen Zhang · 21 Aug 2021

12. Proxy Convexity: A Unified Framework for the Analysis of Neural Networks Trained by Gradient Descent
    Spencer Frei, Quanquan Gu · 25 Jun 2021

13. Learning distinct features helps, provably
    Firas Laakom, Jenni Raitoharju, Alexandros Iosifidis, Moncef Gabbouj · MLT · 10 Jun 2021

14. Learning with Hyperspherical Uniformity
    Weiyang Liu, Rongmei Lin, Zhen Liu, Li Xiong, Bernhard Schölkopf, Adrian Weller · 02 Mar 2021

15. Towards Understanding Ensemble, Knowledge Distillation and Self-Distillation in Deep Learning
    Zeyuan Allen-Zhu, Yuanzhi Li · FedML · 17 Dec 2020

16. Deep Equals Shallow for ReLU Networks in Kernel Regimes
    A. Bietti, Francis R. Bach · 30 Sep 2020

17. When Does Preconditioning Help or Hurt Generalization?
    S. Amari, Jimmy Ba, Roger C. Grosse, Xuechen Li, Atsushi Nitanda, Taiji Suzuki, Denny Wu, Ji Xu · 18 Jun 2020

18. Feature Purification: How Adversarial Training Performs Robust Deep Learning
    Zeyuan Allen-Zhu, Yuanzhi Li · MLT, AAML · 20 May 2020

19. Orthogonal Over-Parameterized Training
    Weiyang Liu, Rongmei Lin, Zhen Liu, James M. Rehg, Liam Paull, Li Xiong, Le Song, Adrian Weller · 09 Apr 2020

20. Symmetry & critical points for a model shallow neural network
    Yossi Arjevani, M. Field · 23 Mar 2020

21. Frequency Bias in Neural Networks for Input of Non-Uniform Density
    Ronen Basri, Meirav Galun, Amnon Geifman, David Jacobs, Yoni Kasten, S. Kritchman · 10 Mar 2020

22. Learning Parities with Neural Networks
    Amit Daniely, Eran Malach · 18 Feb 2020

23. Benign Overfitting in Linear Regression
    Peter L. Bartlett, Philip M. Long, Gábor Lugosi, Alexander Tsigler · MLT · 26 Jun 2019

24. Improving Neural Language Modeling via Adversarial Training
    Dilin Wang, Chengyue Gong, Qiang Liu · AAML · 10 Jun 2019

25. Fast Convergence of Natural Gradient Descent for Overparameterized Neural Networks
    Guodong Zhang, James Martens, Roger C. Grosse · ODL · 27 May 2019

26. What Can ResNet Learn Efficiently, Going Beyond Kernels?
    Zeyuan Allen-Zhu, Yuanzhi Li · 24 May 2019

27. Fine-grained Optimization of Deep Neural Networks
    Mete Ozay · ODL · 22 May 2019

28. Analysis of the Gradient Descent Algorithm for a Deep Neural Network Model with Skip-connections
    E. Weinan, Chao Ma, Qingcan Wang, Lei Wu · MLT · 10 Apr 2019

29. Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks
    Mingchen Li, Mahdi Soltanolkotabi, Samet Oymak · NoLa · 27 Mar 2019

30. Fine-Grained Analysis of Optimization and Generalization for Overparameterized Two-Layer Neural Networks
    Sanjeev Arora, S. Du, Wei Hu, Zhiyuan Li, Ruosong Wang · MLT · 24 Jan 2019

31. Non-attracting Regions of Local Minima in Deep and Wide Neural Networks
    Henning Petzka, C. Sminchisescu · 16 Dec 2018

32. Learning and Generalization in Overparameterized Neural Networks, Going Beyond Two Layers
    Zeyuan Allen-Zhu, Yuanzhi Li, Yingyu Liang · MLT · 12 Nov 2018

33. Gradient Descent Provably Optimizes Over-parameterized Neural Networks
    S. Du, Xiyu Zhai, Barnabás Póczós, Aarti Singh · MLT, ODL · 04 Oct 2018

34. Learning One-hidden-layer ReLU Networks via Gradient Descent
    Xiao Zhang, Yaodong Yu, Lingxiao Wang, Quanquan Gu · MLT · 20 Jun 2018

35. Deep Neural Networks with Multi-Branch Architectures Are Less Non-Convex
    Hongyang R. Zhang, Junru Shao, Ruslan Salakhutdinov · 06 Jun 2018

36. Visualizing the Loss Landscape of Neural Nets
    Hao Li, Zheng Xu, Gavin Taylor, Christoph Studer, Tom Goldstein · 28 Dec 2017

37. Theoretical insights into the optimization landscape of over-parameterized shallow neural networks
    Mahdi Soltanolkotabi, Adel Javanmard, Jason D. Lee · 16 Jul 2017

38. Global optimality conditions for deep neural networks
    Chulhee Yun, S. Sra, Ali Jadbabaie · 08 Jul 2017

39. Deep Semi-Random Features for Nonlinear Function Approximation
    Kenji Kawaguchi, Bo Xie, Vikas Verma, Le Song · 28 Feb 2017

40. The Loss Surfaces of Multilayer Networks
    A. Choromańska, Mikael Henaff, Michaël Mathieu, Gerard Ben Arous, Yann LeCun · ODL · 30 Nov 2014