ResearchTrend.AI
"Oddball SGD": Novelty Driven Stochastic Gradient Descent for Training
  Deep Neural Networks

"Oddball SGD": Novelty Driven Stochastic Gradient Descent for Training Deep Neural Networks

18 September 2015
Andrew J. R. Simpson
ArXiv (abs) | PDF | HTML
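The page reproduces no abstract, but the idea named in the title, novelty-driven sample selection, can be sketched. The snippet below is a minimal illustration under stated assumptions, not the paper's reference implementation: it assumes per-sample errors are already available and draws mini-batch indices with probability proportional to each sample's error, so "oddball" (high-error) samples are revisited more often. The function name and parameters are hypothetical.

```python
import numpy as np

def oddball_batch_indices(errors, batch_size, rng):
    """Draw a mini-batch with selection probability proportional to each
    sample's current error ("novelty"). Hypothetical helper, not the
    paper's own code."""
    errors = np.asarray(errors, dtype=float)
    probs = errors / errors.sum()  # normalize errors into a distribution
    # Sample distinct indices, biased toward high-error ("novel") samples
    return rng.choice(len(errors), size=batch_size, replace=False, p=probs)

# Usage: one sample has a much larger error than the rest, so it ends up
# in nearly every mini-batch drawn.
rng = np.random.default_rng(0)
errors = np.array([0.01] * 9 + [10.0])
picks = np.zeros(10)
for _ in range(200):
    picks[oddball_batch_indices(errors, 2, rng)] += 1
```

In a training loop, `errors` would be refreshed periodically (e.g. from a forward pass over the training set) so that the sampling bias tracks what the network currently finds surprising.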

Papers citing ""Oddball SGD": Novelty Driven Stochastic Gradient Descent for Training Deep Neural Networks" (2 papers):

1. Online Batch Selection for Faster Training of Neural Networks
   I. Loshchilov, Frank Hutter (19 Nov 2015)
2. Uniform Learning in a Deep Neural Network via "Oddball" Stochastic Gradient Descent
   Andrew J. R. Simpson (08 Oct 2015)