ResearchTrend.AI
Improving Deep Neural Network Random Initialization Through Neuronal Rewiring

Leonardo F. S. Scabini, B. De Baets, Odemir M. Bruno
17 July 2022 (arXiv: 2207.08148) [AI4CE]
Papers citing "Improving Deep Neural Network Random Initialization Through Neuronal Rewiring" (38 of 38 papers shown)
- Neural Networks Trained by Weight Permutation are Universal Approximators
  Yongqiang Cai, Gaohang Chen, Zhonghua Qiao. 01 Jul 2024.
- Good Seed Makes a Good Crop: Discovering Secret Seeds in Text-to-Image Diffusion Models
  Katherine Xu, Lingzhi Zhang, Jianbo Shi. 23 May 2024.
- Patches Are All You Need? [ViT]
  Asher Trockman, J. Zico Kolter. 24 Jan 2022.
- Vision Transformer for Small-Size Datasets [ViT]
  Seung Hoon Lee, Seunghyun Lee, B. Song. 27 Dec 2021.
- Quantifying Epistemic Uncertainty in Deep Learning [UQCV, BDL, UD, PER]
  Ziyi Huang, Henry Lam, Haofeng Zhang. 23 Oct 2021.
- Characterizing Learning Dynamics of Deep Neural Networks via Complex Networks
  Emanuele La Malfa, G. La Malfa, Giuseppe Nicosia, Vito Latora. 06 Oct 2021.
- ResNet strikes back: An improved training procedure in timm [AI4TS]
  Ross Wightman, Hugo Touvron, Hervé Jégou. 01 Oct 2021.
- Torch.manual_seed(3407) is all you need: On the influence of random seeds in deep learning architectures for computer vision [3DV, VLM]
  David Picard. 16 Sep 2021.
- Structure and Performance of Fully Connected Neural Networks: Emerging Complex Network Properties [GNN]
  Leonardo F. S. Scabini, Odemir M. Bruno. 29 Jul 2021.
- A Survey on Visual Transformer [ViT]
  Kai Han, Yunhe Wang, Hanting Chen, Xinghao Chen, Jianyuan Guo, ..., Chunjing Xu, Yixing Xu, Zhaohui Yang, Yiman Zhang, Dacheng Tao. 23 Dec 2020.
- Effect of the initial configuration of weights on the training and function of artificial neural networks
  Ricardo J. Jesus, Mário Antunes, R. A. D. Costa, S. Dorogovtsev, J. F. F. Mendes, R. Aguiar. 04 Dec 2020.
- An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale [ViT]
  Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, ..., Matthias Minderer, G. Heigold, Sylvain Gelly, Jakob Uszkoreit, N. Houlsby. 22 Oct 2020.
- Graph Structure of Neural Networks [GNN]
  Jiaxuan You, J. Leskovec, Kaiming He, Saining Xie. 13 Jul 2020.
- The Early Phase of Neural Network Training
  Jonathan Frankle, D. Schwab, Ari S. Morcos. 24 Feb 2020.
- Provable Benefit of Orthogonal Initialization in Optimizing Deep Linear Networks
  Wei Hu, Lechao Xiao, Jeffrey Pennington. 16 Jan 2020.
- Emergence of Network Motifs in Deep Neural Networks [GNN]
  Matteo Zambra, Alberto Testolin, A. Maritan. 27 Dec 2019.
- What's Hidden in a Randomly Weighted Neural Network?
  Vivek Ramanujan, Mitchell Wortsman, Aniruddha Kembhavi, Ali Farhadi, Mohammad Rastegari. 29 Nov 2019.
- RandAugment: Practical automated data augmentation with a reduced search space [MQ]
  E. D. Cubuk, Barret Zoph, Jonathon Shlens, Quoc V. Le. 30 Sep 2019.
- CutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features [OOD]
  Sangdoo Yun, Dongyoon Han, Seong Joon Oh, Sanghyuk Chun, Junsuk Choe, Y. Yoo. 13 May 2019.
- Deconstructing Lottery Tickets: Zeros, Signs, and the Supermask [UQCV]
  Hattie Zhou, Janice Lan, Rosanne Liu, J. Yosinski. 03 May 2019.
- Exploring Randomly Wired Neural Networks for Image Recognition
  Saining Xie, Alexander Kirillov, Ross B. Girshick, Kaiming He. 02 Apr 2019.
- Do ImageNet Classifiers Generalize to ImageNet? [OOD, SSeg, VLM]
  Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, Vaishaal Shankar. 13 Feb 2019.
- A Survey of the Recent Architectures of Deep Convolutional Neural Networks [OOD]
  Asifullah Khan, A. Sohail, Umme Zahoora, Aqsa Saeed Qureshi. 17 Jan 2019.
- Deep learning systems as complex networks [AI4CE, BDL, GNN]
  Alberto Testolin, Michele Piccolini, S. Suweis. 28 Sep 2018.
- Learning Overparameterized Neural Networks via Stochastic Gradient Descent on Structured Data [MLT]
  Yuanzhi Li, Yingyu Liang. 03 Aug 2018.
- The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks
  Jonathan Frankle, Michael Carbin. 09 Mar 2018.
- mixup: Beyond Empirical Risk Minimization [NoLa]
  Hongyi Zhang, Moustapha Cissé, Yann N. Dauphin, David Lopez-Paz. 25 Oct 2017.
- Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms
  Han Xiao, Kashif Rasul, Roland Vollgraf. 25 Aug 2017.
- Random Erasing Data Augmentation
  Zhun Zhong, Liang Zheng, Guoliang Kang, Shaozi Li, Yi Yang. 16 Aug 2017.
- SGDR: Stochastic Gradient Descent with Warm Restarts [ODL]
  I. Loshchilov, Frank Hutter. 13 Aug 2016.
- Deep Networks with Stochastic Depth
  Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, Kilian Q. Weinberger. 30 Mar 2016.
- Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks [ODL]
  Tim Salimans, Diederik P. Kingma. 25 Feb 2016.
- Deep Residual Learning for Image Recognition [MedIm]
  Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. 10 Dec 2015.
- Rethinking the Inception Architecture for Computer Vision [3DV, BDL]
  Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, Z. Wojna. 02 Dec 2015.
- Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift [OOD]
  Sergey Ioffe, Christian Szegedy. 11 Feb 2015.
- Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification [VLM]
  Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. 06 Feb 2015.
- Random Walk Initialization for Training Very Deep Feedforward Networks
  David Sussillo, L. F. Abbott. 19 Dec 2014.
- Exact solutions to the nonlinear dynamics of learning in deep linear neural networks [ODL]
  Andrew M. Saxe, James L. McClelland, Surya Ganguli. 20 Dec 2013.