ResearchTrend.AI

When, where, and how to add new neurons to ANNs
arXiv:2202.08539 (latest version: v2), 17 February 2022
Kaitlin Maile, Emmanuel Rachelson, H. Luga, Dennis G. Wilson

Papers citing "When, where, and how to add new neurons to ANNs" (10 of 10 shown)
  • Growth strategies for arbitrary DAG neural architectures. Stella Douka, Manon Verbockhaven, Théo Rudkiewicz, Stéphane Rivaud, François P. Landes, Sylvain Chevallier, Guillaume Charpiat. 17 Feb 2025.
  • Understanding and Preventing Capacity Loss in Reinforcement Learning. Clare Lyle, Mark Rowland, Will Dabney. 20 Apr 2022.
  • Understanding Dimensional Collapse in Contrastive Self-supervised Learning. Li Jing, Pascal Vincent, Yann LeCun, Yuandong Tian. 18 Oct 2021.
  • Batch Normalization Orthogonalizes Representations in Deep Random Networks. Hadi Daneshmand, Amir Joudaki, Francis R. Bach. 07 Jun 2021.
  • Neural Architecture Search without Training. J. Mellor, Jack Turner, Amos Storkey, Elliot J. Crowley. 08 Jun 2020.
  • Quantifying the Carbon Emissions of Machine Learning. Alexandre Lacoste, A. Luccioni, Victor Schmidt, Thomas Dandres. 21 Oct 2019.
  • Splitting Steepest Descent for Growing Neural Architectures. Qiang Liu, Lemeng Wu, Dilin Wang. 06 Oct 2019.
  • Pruning Filters for Efficient ConvNets. Hao Li, Asim Kadav, Igor Durdanovic, H. Samet, H. Graf. 31 Aug 2016.
  • Wide Residual Networks. Sergey Zagoruyko, N. Komodakis. 23 May 2016.
  • Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. Andrew M. Saxe, James L. McClelland, Surya Ganguli. 20 Dec 2013.