MaxGain: Regularisation of Neural Networks by Constraining Activation Magnitudes

16 April 2018
Henry Gouk, Bernhard Pfahringer, E. Frank, M. Cree
arXiv:1804.05965
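
The title names the technique: regularising a network by bounding how much a layer can magnify its inputs. As a purely illustrative sketch, and not the authors' published algorithm, the PyTorch snippet below shows one way such a constraint could be enforced; the helper name constrain_gain, the max_gain threshold, and the rescale-after-update step are all assumptions made for this example.

    import torch
    import torch.nn as nn

    def constrain_gain(layer: nn.Linear, x: torch.Tensor, max_gain: float) -> None:
        # Hypothetical illustration, not the paper's exact method.
        # Estimate the layer's gain on this batch: the largest ratio of
        # output norm to input norm across the batch.
        with torch.no_grad():
            y = layer(x)
            gain = (y.norm(dim=1) / x.norm(dim=1).clamp_min(1e-12)).max()
            # If the batch gain exceeds the budget, shrink the weights so
            # activations stay bounded (bias left untouched for simplicity).
            if gain > max_gain:
                layer.weight.mul_(max_gain / gain)

    # Usage: apply the projection right after each optimiser step.
    layer = nn.Linear(64, 64)
    opt = torch.optim.SGD(layer.parameters(), lr=0.1)
    x = torch.randn(32, 64)
    loss = layer(x).pow(2).mean()
    loss.backward()
    opt.step()
    constrain_gain(layer, x, max_gain=2.0)

The project-after-update pattern mirrors max-norm weight constraints; the difference is that the quantity being bounded is the observed activation magnitude rather than the weight norm itself.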

Papers citing "MaxGain: Regularisation of Neural Networks by Constraining Activation Magnitudes"

7 papers shown

1. Nonlinearity Enhanced Adaptive Activation Functions (29 Mar 2024) · David Yevick · 1 citation
2. Regularisation of Neural Networks by Enforcing Lipschitz Continuity (12 Apr 2018) · Henry Gouk, E. Frank, Bernhard Pfahringer, M. Cree · 478 citations
3. Concrete Dropout (22 May 2017) · Y. Gal, Jiri Hron, Alex Kendall · Topics: BDL, UQCV · 592 citations
4. Improved Training of Wasserstein GANs (31 Mar 2017) · Ishaan Gulrajani, Faruk Ahmed, Martín Arjovsky, Vincent Dumoulin, Aaron Courville · Topic: GAN · 9,548 citations
5. Understanding deep learning requires rethinking generalization (10 Nov 2016) · Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, Oriol Vinyals · Topic: HAI · 4,629 citations
6. Wide Residual Networks (23 May 2016) · Sergey Zagoruyko, N. Komodakis · 7,985 citations
7. Variational Dropout and the Local Reparameterization Trick (08 Jun 2015) · Diederik P. Kingma, Tim Salimans, Max Welling · Topic: BDL · 1,514 citations