The Benefits of Over-parameterization at Initialization in Deep ReLU Networks
Devansh Arpit, Yoshua Bengio
arXiv:1901.03611 · 11 January 2019
Papers citing "The Benefits of Over-parameterization at Initialization in Deep ReLU Networks" (6 papers)
Compressible Dynamics in Deep Overparameterized Low-Rank Learning & Adaptation
Can Yaras, Peng Wang, Laura Balzano, Qing Qu (06 Jun 2024)
Randomly Initialized One-Layer Neural Networks Make Data Linearly Separable
Promit Ghosal, Srinath Mahankali, Yihang Sun (24 May 2022)
Activation Functions in Deep Learning: A Comprehensive Survey and Benchmark
S. Dubey, S. Singh, B. B. Chaudhuri (29 Sep 2021)
BR-NS: an Archive-less Approach to Novelty Search
Achkan Salehi, Alexandre Coninx, Stéphane Doncieux (08 Apr 2021)
A Comprehensive and Modularized Statistical Framework for Gradient Norm Equality in Deep Neural Networks
Zhaodong Chen, Lei Deng, Bangyan Wang, Guoqi Li, Yuan Xie (01 Jan 2020)
Deep Learning for CSI Feedback Based on Superimposed Coding
Chaojin Qing, Bin Cai, Qingyao Yang, Jiafan Wang, Chuan Huang (27 Jul 2019)