Explaining the Success of AdaBoost and Random Forests as Interpolating Classifiers
A. Wyner, Matthew A. Olson, J. Bleich, David Mease
arXiv:1504.07676 (28 April 2015)

Papers citing "Explaining the Success of AdaBoost and Random Forests as Interpolating Classifiers" (45 papers):
Randomization Can Reduce Both Bias and Variance: A Case Study in Random Forests
Brian Liu, Rahul Mazumder (20 Feb 2024)

The Surprising Harmfulness of Benign Overfitting for Adversarial Robustness
Yifan Hao, Tong Zhang (19 Jan 2024) [AAML]

Generalized equivalences between subsampling and ridge regularization
Pratik V. Patil, Jin-Hong Du (29 May 2023)

Do Neural Networks Generalize from Self-Averaging Sub-classifiers in the Same Way As Adaptive Boosting?
Michael Sun, Peter Chatain (14 Feb 2023) [AI4CE]

Bit Error and Block Error Rate Training for ML-Assisted Communication
Reinhard Wiesmayr, Gian Marti, C. Dick, Haochuan Song, Christoph Studer (25 Oct 2022)

Benefit of Interpolation in Nearest Neighbor Algorithms
Yue Xing, Qifan Song, Guang Cheng (23 Feb 2022)

Is interpolation benign for random forest regression?
Ludovic Arnould, Claire Boyer, Erwan Scornet (08 Feb 2022)

VisRuler: Visual Analytics for Extracting Decision Rules from Bagged and Boosted Decision Trees
Angelos Chatzimparmpas, Rafael M. Martins, Andreas Kerren (01 Dec 2021)

VC dimension of partially quantized neural networks in the overparametrized regime
Yutong Wang, Clayton D. Scott (06 Oct 2021)

Classification and Adversarial examples in an Overparameterized Linear Model: A Signal Processing Perspective
Adhyyan Narang, Vidya Muthukumar, A. Sahai (27 Sep 2021) [SILM, AAML]

A Farewell to the Bias-Variance Tradeoff? An Overview of the Theory of Overparameterized Machine Learning
Yehuda Dar, Vidya Muthukumar, Richard G. Baraniuk (06 Sep 2021)

Fit without fear: remarkable mathematical phenomena of deep learning through the prism of interpolation
M. Belkin (29 May 2021)

Latent Gaussian Model Boosting
Fabio Sigrist (19 May 2021) [AI4CE]

AdaBoost and robust one-bit compressed sensing
Geoffrey Chinot, Felix Kuchelmeister, Matthias Löffler, Sara van de Geer (05 May 2021)

How rotational invariance of common kernels prevents generalization in high dimensions
Konstantin Donhauser, Mingqi Wu, Fanny Yang (09 Apr 2021)

Fitting Elephants
P. Mitra (31 Mar 2021)

Trees, Forests, Chickens, and Eggs: When and Why to Prune Trees in a Random Forest
Siyu Zhou, L. Mentch (30 Mar 2021)

Deep learning: a statistical viewpoint
Peter L. Bartlett, Andrea Montanari, Alexander Rakhlin (16 Mar 2021)

Slow-Growing Trees
Philippe Goulet Coulombe (02 Mar 2021)

Global convergence of Negative Correlation Extreme Learning Machine
Carlos Perales-González (30 Sep 2020)

Distributional Generalization: A New Kind of Generalization
Preetum Nakkiran, Yamini Bansal (17 Sep 2020) [OOD]

Data-assisted combustion simulations with dynamic submodel assignment using random forests
Wai Tong Chung, A. Mishra, N. Perakis, M. Ihme (08 Sep 2020)

To Bag is to Prune
Philippe Goulet Coulombe (17 Aug 2020) [UQCV]

What Neural Networks Memorize and Why: Discovering the Long Tail via Influence Estimation
Vitaly Feldman, Chiyuan Zhang (09 Aug 2020) [TDI]

Multiple Descent: Design Your Own Generalization Curve
Lin Chen, Yifei Min, M. Belkin, Amin Karbasi (03 Aug 2020) [DRL]

Boosting algorithms in energy research: A systematic review
Hristos Tyralis, Georgia Papacharalampous (01 Apr 2020)

Margin Maximization as Lossless Maximal Compression
N. Nikolaou, Henry W. J. Reeve, Gavin Brown (28 Jan 2020)

Simplifying Random Forests: On the Trade-off between Interpretability and Accuracy
Michael Rapp, E. Mencía, Johannes Fürnkranz (11 Nov 2019) [FAtt]

Randomization as Regularization: A Degrees of Freedom Explanation for Random Forest Success
L. Mentch, Siyu Zhou (01 Nov 2019)

The Implicit Regularization of Ordinary Least Squares Ensembles
Daniel LeJeune, Hamid Javadi, Richard G. Baraniuk (10 Oct 2019)

Greedy Convex Ensemble
Thanh Tan Nguyen, N. Ye, Peter L. Bartlett (09 Oct 2019)

Keeping Designers in the Loop: Communicating Inherent Algorithmic Trade-offs Across Multiple Objectives
Bowen Yu, Ye Yuan, Loren G. Terveen, Zhiwei Steven Wu, Jodi Forlizzi, Haiyi Zhu (07 Oct 2019)

Improving Outbreak Detection with Stacking of Statistical Surveillance Methods
Moritz Kulessa, E. Mencía, Johannes Fürnkranz (17 Jul 2019)

Does Learning Require Memorization? A Short Tale about a Long Tail
Vitaly Feldman (12 Jun 2019) [TDI]

KTBoost: Combined Kernel and Tree Boosting
Fabio Sigrist (11 Feb 2019)

Consistency of Interpolation with Laplace Kernels is a High-Dimensional Phenomenon
Alexander Rakhlin, Xiyu Zhai (28 Dec 2018)

Reconciling modern machine learning practice and the bias-variance trade-off
M. Belkin, Daniel J. Hsu, Siyuan Ma, Soumik Mandal (28 Dec 2018)

Unknown Examples & Machine Learning Model Generalization
Yeounoh Chung, P. Haas, E. Upfal, Tim Kraska (24 Aug 2018) [OOD]

Gradient and Newton Boosting for Classification and Regression
Fabio Sigrist (09 Aug 2018)

Overfitting or perfect fitting? Risk bounds for classification and regression rules that interpolate
M. Belkin, Daniel J. Hsu, P. Mitra (13 Jun 2018) [AI4CE]

Sharp Analysis of a Simple Model for Random Forests
Jason M. Klusowski (07 May 2018) [FAtt]

To understand deep learning we need to understand kernel learning
M. Belkin, Siyuan Ma, Soumik Mandal (05 Feb 2018)

Experimental Identification of Hard Data Sets for Classification and Feature Selection Methods with Insights on Method Selection
Cuiju Luan, Guozhu Dong (24 Mar 2017)

An update on statistical boosting in biomedicine
A. Mayr, B. Hofner, Elisabeth Waldmann, Tobias Hepp, O. Gefeller, S. Meyer (27 Feb 2017)

Sparse Projection Oblique Randomer Forests
Tyler M. Tomita, J. Browne, Cencheng Shen, Jaewon Chung, Jesse Patsolic, ..., Jason Yim, Carey E. Priebe, Randal C. Burns, Mauro Maggioni, Joshua T. Vogelstein (10 Jun 2015)