The Benefits of Over-parameterization at Initialization in Deep ReLU Networks

Devansh Arpit, Yoshua Bengio
arXiv:1901.03611 · 11 January 2019

Papers citing "The Benefits of Over-parameterization at Initialization in Deep ReLU Networks"

6 / 6 papers shown

1. Compressible Dynamics in Deep Overparameterized Low-Rank Learning & Adaptation
   Can Yaras, Peng Wang, Laura Balzano, Qing Qu · AI4CE · 06 Jun 2024
2. Randomly Initialized One-Layer Neural Networks Make Data Linearly Separable
   Promit Ghosal, Srinath Mahankali, Yihang Sun · MLT · 24 May 2022
3. Activation Functions in Deep Learning: A Comprehensive Survey and Benchmark
   S. Dubey, S. Singh, B. B. Chaudhuri · 29 Sep 2021
4. BR-NS: an Archive-less Approach to Novelty Search
   Achkan Salehi, Alexandre Coninx, Stéphane Doncieux · 08 Apr 2021
5. A Comprehensive and Modularized Statistical Framework for Gradient Norm Equality in Deep Neural Networks
   Zhaodong Chen, Lei Deng, Bangyan Wang, Guoqi Li, Yuan Xie · 01 Jan 2020
6. Deep Learning for CSI Feedback Based on Superimposed Coding
   Chaojin Qing, Bin Cai, Qingyao Yang, Jiafan Wang, Chuan Huang · 27 Jul 2019