ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

arXiv:1705.10941 | Cited By
Spectral Norm Regularization for Improving the Generalizability of Deep Learning
31 May 2017
Yuichi Yoshida, Takeru Miyato
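For context on the technique named in the title: the regularizer penalizes the spectral norm (largest singular value) of each weight matrix, which bounds the layer's Lipschitz constant. A minimal NumPy sketch, using power iteration to estimate the spectral norm; the function names and the penalty coefficient `lam` are illustrative, not taken from the paper:

```python
import numpy as np

def spectral_norm(W, n_iters=50):
    """Estimate the largest singular value of W via power iteration."""
    rng = np.random.default_rng(0)
    u = rng.normal(size=W.shape[0])
    for _ in range(n_iters):
        v = W.T @ u          # back-project onto the input space
        v /= np.linalg.norm(v)
        u = W @ v            # forward-project onto the output space
        u /= np.linalg.norm(u)
    # Rayleigh-quotient-style estimate: u^T W v -> sigma_max(W)
    return float(u @ (W @ v))

def regularized_loss(base_loss, weights, lam=0.01):
    """Add a spectral-norm penalty of lam/2 * sigma_max(W)^2 per layer."""
    return base_loss + 0.5 * lam * sum(spectral_norm(W) ** 2 for W in weights)
```

In practice a framework would reuse the power-iteration vectors across training steps rather than restarting from a random vector, since the weights change only slightly per update.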

Papers citing "Spectral Norm Regularization for Improving the Generalizability of Deep Learning"

Showing 15 of 65 citing papers.
Absum: Simple Regularization Method for Reducing Structural Sensitivity of Convolutional Neural Networks
  Sekitoshi Kanai, Yasutoshi Ida, Yasuhiro Fujiwara, Masanori Yamada, S. Adachi
  19 Sep 2019 | AAML

A Frobenius norm regularization method for convolutional kernels to avoid unstable gradient problem
  Pei-Chang Guo
  25 Jul 2019

ReachNN: Reachability Analysis of Neural-Network Controlled Systems
  Chao Huang, Jiameng Fan, Wenchao Li, Xin Chen, Qi Zhu
  25 Jun 2019

Generative Adversarial Networks in Computer Vision: A Survey and Taxonomy
  Zhengwei Wang, Qi She, T. Ward
  04 Jun 2019 | MedIm, EGVM

PowerSGD: Practical Low-Rank Gradient Compression for Distributed Optimization
  Thijs Vogels, Sai Praneeth Karimireddy, Martin Jaggi
  31 May 2019

Robust Sparse Regularization: Simultaneously Optimizing Neural Network Robustness and Compactness
  Adnan Siraj Rakin, Zhezhi He, Li Yang, Yanzhi Wang, Liqiang Wang, Deliang Fan
  30 May 2019 | AAML

An Empirical Study of Large-Batch Stochastic Gradient Descent with Structured Covariance Noise
  Yeming Wen, Kevin Luk, Maxime Gazeau, Guodong Zhang, Harris Chan, Jimmy Ba
  21 Feb 2019 | ODL

Implicit Self-Regularization in Deep Neural Networks: Evidence from Random Matrix Theory and Implications for Learning
  Charles H. Martin, Michael W. Mahoney
  02 Oct 2018 | AI4CE

A Kernel Perspective for Regularizing Deep Neural Networks
  A. Bietti, Grégoire Mialon, Dexiong Chen, Julien Mairal
  30 Sep 2018

The Singular Values of Convolutional Layers
  Hanie Sedghi, Vineet Gupta, Philip M. Long
  26 May 2018 | FAtt

L2-Nonexpansive Neural Networks
  Haifeng Qian, M. Wegman
  22 Feb 2018

Spectral Normalization for Generative Adversarial Networks
  Takeru Miyato, Toshiki Kataoka, Masanori Koyama, Yuichi Yoshida
  16 Feb 2018 | ODL

Gradient Regularization Improves Accuracy of Discriminative Models
  D. Varga, Adrián Csiszárik, Zsolt Zombori
  28 Dec 2017

On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
  N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang
  15 Sep 2016 | ODL

The Loss Surfaces of Multilayer Networks
  A. Choromańska, Mikael Henaff, Michaël Mathieu, Gerard Ben Arous, Yann LeCun
  30 Nov 2014 | ODL