SensLI: Sensitivity-Based Layer Insertion for Neural Networks

27 November 2023
Evelyn Herberg, Roland A. Herzog, Frederik Köhne, Leonie Kreis, Anton Schiela
arXiv:2311.15995 (abs · PDF · HTML) · GitHub (2★)

Papers citing "SensLI: Sensitivity-Based Layer Insertion for Neural Networks"

17 papers shown

Growing Tiny Networks: Spotting Expressivity Bottlenecks and Fixing Them Optimally
Manon Verbockhaven, Sylvain Chevallier, Guillaume Charpiat
30 May 2024 · 4 citations

Self-Expanding Neural Networks
Rupert Mitchell, Robin Menzenbach, Kristian Kersting, Martin Mundt
10 Jul 2023 · 9 citations

When, where, and how to add new neurons to ANNs
Kaitlin Maile, Emmanuel Rachelson, H. Luga, Dennis G. Wilson
17 Feb 2022 · 16 citations

GradMax: Growing Neural Networks using Gradient Information
Utku Evci, B. V. Merrienboer, Thomas Unterthiner, Max Vladymyrov, Fabian Pedregosa
13 Jan 2022 · 56 citations

Firefly Neural Architecture Descent: a General Approach for Growing Neural Networks
Lemeng Wu, Bo Liu, Peter Stone, Qiang Liu
17 Feb 2021 · 55 citations

PyTorch: An Imperative Style, High-Performance Deep Learning Library
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, ..., Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, Soumith Chintala
03 Dec 2019 · 42,559 citations

Splitting Steepest Descent for Growing Neural Architectures
Qiang Liu, Lemeng Wu, Dilin Wang
06 Oct 2019 · 63 citations

AutoGrow: Automatic Layer Growing in Deep Convolutional Networks
W. Wen, Feng Yan, Yiran Chen, H. Li
07 Jun 2019 · 40 citations

MorphNet: Fast & Simple Resource-Constrained Structure Learning of Deep Networks
A. Gordon, Elad Eban, Ofir Nachum, Bo Chen, Hao Wu, Tien-Ju Yang, Edward Choi
18 Nov 2017 · 339 citations

NeST: A Neural Network Synthesis Tool Based on a Grow-and-Prune Paradigm
Xiaoliang Dai, Hongxu Yin, N. Jha
06 Nov 2017 · 238 citations

Multi-level Residual Networks from Dynamical Systems View
B. Chang, Lili Meng, E. Haber, Frederick Tung, David Begert
27 Oct 2017 · 172 citations

Stable Architectures for Deep Neural Networks
E. Haber, Lars Ruthotto
09 May 2017 · 733 citations

AdaNet: Adaptive Structural Learning of Artificial Neural Networks
Corinna Cortes, X. Gonzalvo, Vitaly Kuznetsov, M. Mohri, Scott Yang
05 Jul 2016 · 285 citations

Optimization Methods for Large-Scale Machine Learning
Léon Bottou, Frank E. Curtis, J. Nocedal
15 Jun 2016 · 3,224 citations

Network Morphism
Tao Wei, Changhu Wang, Y. Rui, Chen Chen
05 Mar 2016 · 177 citations

Net2Net: Accelerating Learning via Knowledge Transfer
Tianqi Chen, Ian Goodfellow, Jonathon Shlens
18 Nov 2015 · 672 citations

Adam: A Method for Stochastic Optimization
Diederik P. Kingma, Jimmy Ba
22 Dec 2014 · 150,260 citations