ResearchTrend.AI

arXiv:1802.02375
ShakeDrop Regularization for Deep Residual Learning

7 February 2018
Yoshihiro Yamada
Masakazu Iwamura
Takuya Akiba
K. Kise
Abstract

Overfitting is a crucial problem in deep neural networks, even in the latest network architectures. In this paper, to relieve overfitting in ResNet and its improvements (i.e., Wide ResNet, PyramidNet, and ResNeXt), we propose a new regularization method named ShakeDrop regularization. ShakeDrop is inspired by Shake-Shake, an effective regularization method that can be applied only to ResNeXt. ShakeDrop is even more effective than Shake-Shake and can be successfully applied not only to ResNeXt but also to ResNet, Wide ResNet, and PyramidNet. The key to realizing ShakeDrop is the stability of training: since effective regularization often causes unstable training, we introduce a training stabilizer, which is an unusual use of an existing regularizer. Experiments reveal that ShakeDrop achieves generalization performance comparable or superior to that of conventional methods.
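As a rough illustration of the mechanism the abstract describes, the sketch below implements the ShakeDrop training-time forward rule for a single residual block as given in the paper: the residual branch F(x) is scaled by b + α − b·α, where b ~ Bernoulli(p) decides whether the block acts as a plain residual unit (b = 1) or is perturbed by a random coefficient α ∈ [−1, 1] (b = 0); at test time the coefficient is replaced by its expectation. The scalar formulation, the parameter names, and the uniform range for α here are simplifications for illustration; the paper applies this per-block (and per-element/per-channel variants) inside full networks, with the backward pass perturbed by a separate random variable β.

```python
import random

def shakedrop_forward(x, residual, p=0.5, training=True, rng=random):
    """Illustrative single-block ShakeDrop forward pass (scalar sketch).

    x        : output of the identity (skip) branch
    residual : output of the residual branch F(x)
    p        : probability that the block behaves as a plain residual unit

    Training: out = x + (b + alpha - b*alpha) * residual,
              with b ~ Bernoulli(p) and alpha ~ Uniform(-1, 1).
              When b = 1 the coefficient is 1 (ordinary residual unit);
              when b = 0 it is alpha (the "shake" perturbation).
    Test:     the coefficient is replaced by its expectation,
              E[b + alpha - b*alpha] = p (since E[alpha] = 0),
              analogous to the test-time rescaling in stochastic depth.
    """
    if not training:
        return x + p * residual
    b = 1 if rng.random() < p else 0
    alpha = rng.uniform(-1.0, 1.0)
    return x + (b + alpha - b * alpha) * residual
```

With p = 1 every block reduces to a standard residual unit, and with p = 0 the residual branch is always multiplied by a random α, so p interpolates between no regularization and the strongest perturbation.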
