Regularizing CNNs with Locally Constrained Decorrelations
arXiv:1611.01967
7 November 2016
Pau Rodríguez López, Jordi Gonzalez, Guillem Cucurull, J. M. Gonfaus, F. X. Roca

Papers citing "Regularizing CNNs with Locally Constrained Decorrelations"

29 / 79 papers shown
  • The Expressivity and Training of Deep Neural Networks: toward the Edge of Chaos? (Gege Zhang, Gang-cheng Li, Ningwei Shen, Weidong Zhang; 11 Oct 2019)
  • Continual Learning in Neural Networks (Rahaf Aljundi; 07 Oct 2019) [CLL]
  • ASNI: Adaptive Structured Noise Injection for shallow and deep neural networks (Beyrem Khalfaoui, Joseph Boyd, Jean-Philippe Vert; 21 Sep 2019)
  • Regularizing Neural Networks via Minimizing Hyperspherical Energy (Rongmei Lin, Weiyang Liu, Zhen Liu, Chen Feng, Zhiding Yu, James M. Rehg, Li Xiong, Le Song; 12 Jun 2019)
  • P3SGD: Patient Privacy Preserving SGD for Regularizing Deep CNNs in Pathological Image Classification (Bingzhe Wu, Shiwan Zhao, Guangyu Sun, Xiaolu Zhang, Zhong Su, C. Zeng, Zhihong Liu; 30 May 2019)
  • Locality-Promoting Representation Learning (Johannes Schneider; 25 May 2019)
  • How degenerate is the parametrization of neural networks with the ReLU activation function? (Julius Berner, Dennis Elbrächter, Philipp Grohs; 23 May 2019) [ODL]
  • Minimal model of permutation symmetry in unsupervised learning (Tianqi Hou, K. Y. Michael Wong, Haiping Huang; 30 Apr 2019)
  • Deep Multi-View Learning using Neuron-Wise Correlation-Maximizing Regularizers (Kui Jia, Jiehong Lin, Mingkui Tan, Dacheng Tao; 25 Apr 2019) [3DV]
  • LP-3DCNN: Unveiling Local Phase in 3D Convolutional Neural Networks (Sudhakar Kumawat, Shanmuganathan Raman; 06 Apr 2019) [3DPC]
  • Iterative Normalization: Beyond Standardization towards Efficient Whitening (Lei Huang, Yi Zhou, Fan Zhu, Li Liu, Ling Shao; 06 Apr 2019)
  • On Correlation of Features Extracted by Deep Neural Networks (B. Ayinde, T. Inanc, J. Zurada; 30 Jan 2019)
  • Diversity Regularized Adversarial Learning (B. Ayinde, Keishin Nishihama, J. Zurada; 30 Jan 2019) [GAN]
  • Leveraging Filter Correlations for Deep Model Compression (Pravendra Singh, Vinay K. Verma, Piyush Rai, Vinay P. Namboodiri; 26 Nov 2018)
  • RePr: Improved Training of Convolutional Filters (Aaditya (Adi) Prakash, J. Storer, D. Florêncio, Cha Zhang; 18 Nov 2018) [VLM, CVBM]
  • Can We Gain More from Orthogonality Regularizations in Training Deep CNNs? (Nitin Bansal, Xiaohan Chen, Zhangyang Wang; 22 Oct 2018) [OOD]
  • Removing the Feature Correlation Effect of Multiplicative Noise (Zijun Zhang, Yining Zhang, Zongpeng Li; 19 Sep 2018)
  • Deep Asymmetric Networks with a Set of Node-wise Variant Activation Functions (Jinhyeok Jang, Hyunjoong Cho, Jaehong Kim, Jaeyeon Lee, Seungjoon Yang; 11 Sep 2018)
  • Filter Distillation for Network Compression (Xavier Suau, Luca Zappella, N. Apostoloff; 20 Jul 2018)
  • Analysis of Invariance and Robustness via Invertibility of ReLU-Networks (Jens Behrmann, Sören Dittmer, Pascal Fernsel, Peter Maass; 25 Jun 2018)
  • Selfless Sequential Learning (Rahaf Aljundi, Marcus Rohrbach, Tinne Tuytelaars; 14 Jun 2018) [CLL]
  • Learning towards Minimum Hyperspherical Energy (Weiyang Liu, Rongmei Lin, Ziqiang Liu, Lixin Liu, Zhiding Yu, Bo Dai, Le Song; 23 May 2018)
  • Decorrelated Batch Normalization (Lei Huang, Dawei Yang, B. Lang, Jia Deng; 23 Apr 2018)
  • Building Efficient ConvNets using Redundant Feature Pruning (B. Ayinde, J. Zurada; 21 Feb 2018) [VLM, 3DPC]
  • Learning Less-Overlapping Representations (P. Xie, Hongbao Zhang, Eric P. Xing; 25 Nov 2017)
  • Compression-aware Training of Deep Networks (J. Álvarez, Mathieu Salzmann; 07 Nov 2017)
  • Orthogonal Weight Normalization: Solution to Optimization over Multiple Dependent Stiefel Manifolds in Deep Neural Networks (Lei Huang, Xianglong Liu, B. Lang, Adams Wei Yu, Yongliang Wang, Bo Li; 16 Sep 2017) [ODL]
  • Non-linear Convolution Filters for CNN-based Learning (Georgios Zoumpourlis, Alexandros Doumanoglou, N. Vretos, P. Daras; 23 Aug 2017)
  • Building effective deep neural network architectures one feature at a time (Martin Mundt, Tobias Weis, K. Konda, Visvanathan Ramesh; 18 May 2017)