

Dither is Better than Dropout for Regularising Deep Neural Networks
Andrew J. R. Simpson
19 August 2015
arXiv:1508.04826

Papers citing "Dither is Better than Dropout for Regularising Deep Neural Networks" (3 papers):

1. Uniform Learning in a Deep Neural Network via "Oddball" Stochastic Gradient Descent. Andrew J. R. Simpson, 08 Oct 2015.
2. Use it or Lose it: Selective Memory and Forgetting in a Perpetual Learning Machine. Andrew J. R. Simpson, 10 Sep 2015.
3. On-the-Fly Learning in a Perpetual Learning Machine. Andrew J. R. Simpson, 03 Sep 2015.