ResearchTrend.AI
Active Bias: Training More Accurate Neural Networks by Emphasizing High Variance Samples
Haw-Shiuan Chang, Erik Learned-Miller, Andrew McCallum
arXiv:1704.07433 · 24 April 2017
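For context on the idea named in the title, below is a minimal, illustrative sketch of emphasizing high-variance samples during training: it tracks each sample's predicted probability for its correct class across recent epochs and draws minibatches in proportion to that prediction variance. The weighting scheme, constants, and array names here are assumptions for illustration only, not the paper's exact algorithm.

# Illustrative sketch (not the paper's exact method): bias minibatch
# sampling toward training examples whose correct-class probability has
# varied the most over recent epochs.
import numpy as np

rng = np.random.default_rng(0)

n_samples, n_epochs_tracked = 1000, 5
# Hypothetical history of p(correct class) per sample over the last few
# epochs; in a real training loop this would be recorded each epoch.
prob_history = rng.uniform(0.0, 1.0, size=(n_epochs_tracked, n_samples))

# Per-sample variance of the predictions across the tracked epochs.
variance = prob_history.var(axis=0)

# Turn variances into sampling weights; the small constant keeps
# zero-variance samples from being excluded entirely.
eps = 1e-3
weights = variance + eps
weights /= weights.sum()

# Draw a minibatch biased toward high-variance samples.
batch_indices = rng.choice(n_samples, size=64, replace=False, p=weights)
print(batch_indices[:10])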

Papers citing "Active Bias: Training More Accurate Neural Networks by Emphasizing High Variance Samples"

26 papers shown

Privacy-Preserving Model and Preprocessing Verification for Machine Learning
Wenbiao Li, Anisa Halimi, Xiaoqian Jiang, Jaideep Vaidya, Erman Ayday
AAML · 0 citations · 14 Jan 2025

Imbalanced Medical Image Segmentation with Pixel-dependent Noisy Labels
Erjian Guo, Zicheng Wang, Zhen Zhao, Luping Zhou
NoLa · 0 citations · 12 Jan 2025

Multi-Label Bayesian Active Learning with Inter-Label Relationships
Yuanyuan Qi, Jueqing Lu, Xiaohao Yang, Joanne Enticott, Lan Du
0 citations · 26 Nov 2024

Learning with Confident Examples: Rank Pruning for Robust Classification with Noisy Labels
Curtis G. Northcutt, Tailin Wu, Isaac L. Chuang
NoLa · 157 citations · 04 May 2017

Understanding deep learning requires rethinking generalization
Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, Oriol Vinyals
HAI · 4,612 citations · 10 Nov 2016

Entropy-SGD: Biasing Gradient Descent Into Wide Valleys
Pratik Chaudhari, A. Choromańska, Stefano Soatto, Yann LeCun, Carlo Baldassi, C. Borgs, J. Chayes, Levent Sagun, R. Zecchina
ODL · 769 citations · 06 Nov 2016

Toward Implicit Sample Noise Modeling: Deviation-driven Matrix Factorization
Guang-He Lee, Shao-Wen Yang, Shou-de Lin
2 citations · 28 Oct 2016

Mollifying Networks
Çağlar Gülçehre, Marcin Moczulski, Francesco Visin, Yoshua Bengio
46 citations · 17 Aug 2016

Learning to learn by gradient descent by gradient descent
Marcin Andrychowicz, Misha Denil, Sergio Gomez Colmenarejo, Matthew W. Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, Nando de Freitas
2,000 citations · 14 Jun 2016

Robust Probabilistic Modeling with Bayesian Data Reweighting
Yixin Wang, A. Kucukelbir, David M. Blei
OOD, NoLa · 12 citations · 13 Jun 2016

Training Region-based Object Detectors with Online Hard Example Mining
Abhinav Shrivastava, Abhinav Gupta, Ross B. Girshick
ObjD · 2,411 citations · 12 Apr 2016

A Variational Analysis of Stochastic Gradient Algorithms
Stephan Mandt, Matthew D. Hoffman, David M. Blei
157 citations · 08 Feb 2016

Active Sampler: Light-weight Accelerator for Complex Data Analytics at Scale
Jinyang Gao, H. V. Jagadish, Beng Chin Ooi
18 citations · 12 Dec 2015

Deep Residual Learning for Image Recognition
Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
MedIm · 192,638 citations · 10 Dec 2015

Variance Reduction in SGD by Distributed Importance Sampling
Guillaume Alain, Alex Lamb, Chinnadhurai Sankar, Aaron Courville, Yoshua Bengio
FedML · 198 citations · 20 Nov 2015

Online Batch Selection for Faster Training of Neural Networks
I. Loshchilov, Frank Hutter
ODL · 299 citations · 19 Nov 2015

What Objective Does Self-paced Learning Indeed Optimize?
Deyu Meng, Qian Zhao, Lu Jiang
77 citations · 19 Nov 2015

Stochastic Gradient Made Stable: A Manifold Propagation Approach for Large-Scale Optimization
Yadong Mu, Wei Liu, Wei Fan
33 citations · 28 Jun 2015

Distilling the Knowledge in a Neural Network
Geoffrey E. Hinton, Oriol Vinyals, J. Dean
FedML · 19,448 citations · 09 Mar 2015

Adam: A Method for Stochastic Optimization
Diederik P. Kingma, Jimmy Ba
ODL · 149,474 citations · 22 Dec 2014

Variational Tempering
Stephan Mandt, James McInerney, Farhan Abrol, Rajesh Ranganath, David M. Blei
BDL · 55 citations · 07 Nov 2014

Convolutional Neural Networks for Sentence Classification
Yoon Kim
AILaw, VLM · 13,395 citations · 25 Aug 2014

A Proximal Stochastic Gradient Method with Progressive Variance Reduction
Lin Xiao, Tong Zhang
ODL · 738 citations · 19 Mar 2014

No More Pesky Learning Rates
Tom Schaul, Sixin Zhang, Yann LeCun
477 citations · 06 Jun 2012

Bayesian Active Learning for Classification and Preference Learning
N. Houlsby, Ferenc Huszár, Zoubin Ghahramani, M. Lengyel
901 citations · 24 Dec 2011

Natural Language Processing (almost) from Scratch
R. Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, Pavel P. Kuksa
7,711 citations · 02 Mar 2011