
Submodular Batch Selection for Training Deep Neural Networks

arXiv:1906.08771 · 20 June 2019

K. J. Joseph, R. VamshiTeja, Krishnakant Singh, V. Balasubramanian

Papers citing "Submodular Batch Selection for Training Deep Neural Networks"

17 / 17 papers shown
  1. Self-Paced Learning with Adaptive Deep Visual Embeddings. Vithursan Thangarasa, Graham W. Taylor. 24 Jul 2018.
  2. Active Mini-Batch Sampling using Repulsive Point Processes. Cheng Zhang, Cengiz Öztireli, Stephan Mandt, G. Salvi. 08 Apr 2018.
  3. Not All Samples Are Created Equal: Deep Learning with Importance Sampling. Angelos Katharopoulos, François Fleuret. 02 Mar 2018.
  4. Biased Importance Sampling for Deep Neural Network Training. Angelos Katharopoulos, François Fleuret. 31 May 2017.
  5. Determinantal Point Processes for Mini-Batch Diversification. Cheng Zhang, Hedvig Kjellström, Stephan Mandt. 01 May 2017.
  6. Active Bias: Training More Accurate Neural Networks by Emphasizing High Variance Samples. Haw-Shiuan Chang, Erik Learned-Miller, Andrew McCallum. 24 Apr 2017.
  7. Fast DPP Sampling for Nyström with Application to Kernel Methods. Chengtao Li, Stefanie Jegelka, S. Sra. 19 Mar 2016.
  8. Katyusha: The First Direct Acceleration of Stochastic Gradient Methods. Zeyuan Allen-Zhu. 18 Mar 2016.
  9. Deep Residual Learning for Image Recognition. Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. 10 Dec 2015.
  10. Variance Reduction in SGD by Distributed Importance Sampling. Guillaume Alain, Alex Lamb, Chinnadhurai Sankar, Aaron Courville, Yoshua Bengio. 20 Nov 2015.
  11. Online Batch Selection for Faster Training of Neural Networks. I. Loshchilov, Frank Hutter. 19 Nov 2015.
  12. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. Sergey Ioffe, Christian Szegedy. 11 Feb 2015.
  13. Adam: A Method for Stochastic Optimization. Diederik P. Kingma, Jimmy Ba. 22 Dec 2014.
  14. Distributed Submodular Maximization. Baharan Mirzasoleiman, Amin Karbasi, Rik Sarkar, Andreas Krause. 03 Nov 2014.
  15. Lazier Than Lazy Greedy. Baharan Mirzasoleiman, Ashwinkumar Badanidiyuru, Amin Karbasi, J. Vondrák, Andreas Krause. 28 Sep 2014.
  16. Accelerating Minibatch Stochastic Gradient Descent using Stratified Sampling. P. Zhao, Tong Zhang. 13 May 2014.
  17. Stochastic Optimization with Importance Sampling. P. Zhao, Tong Zhang. 13 Jan 2014.