KTBoost: Combined Kernel and Tree Boosting

11 February 2019
Fabio Sigrist
ArXiv (abs) · PDF · HTML
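For context on what the title refers to: KTBoost combines two families of base learners in a single boosting algorithm, adding at each iteration either a regression tree or a reproducing-kernel (RKHS) regression function, whichever reduces the empirical risk more. The sketch below illustrates that selection step for squared-error loss, using scikit-learn's DecisionTreeRegressor and KernelRidge as stand-in base learners; the depth, penalty, kernel bandwidth, and learning rate are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch of the KTBoost idea (illustrative, not the paper's code):
# at every boosting step, fit BOTH a tree and a kernel regressor to the
# current residuals and keep whichever one fits them better.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.kernel_ridge import KernelRidge


def ktboost_fit(X, y, n_iter=50, nu=0.1):
    """L2 boosting with tree-or-kernel base learner selection."""
    init = float(y.mean())
    F = np.full(len(y), init)              # current ensemble prediction
    learners = []
    for _ in range(n_iter):
        r = y - F                          # residuals = negative L2 gradients
        tree = DecisionTreeRegressor(max_depth=3).fit(X, r)
        kern = KernelRidge(alpha=1.0, kernel="rbf", gamma=0.1).fit(X, r)
        # keep the candidate with the smaller squared error on the residuals
        best = min((tree, kern),
                   key=lambda m: np.mean((r - m.predict(X)) ** 2))
        F += nu * best.predict(X)          # damped update (learning rate nu)
        learners.append(best)
    return init, learners


def ktboost_predict(init, learners, X, nu=0.1):
    F = np.full(X.shape[0], init)
    for m in learners:
        F += nu * m.predict(X)
    return F
```

On data with both smooth trends and sharp jumps, the selection step tends to mix the two learner types, which is the hybrid behavior the title describes; the paper itself additionally develops Newton-style updates and other loss functions.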

Papers citing "KTBoost: Combined Kernel and Tree Boosting"

All 19 citing papers are listed below, newest first.

SEEK: Self-adaptive Explainable Kernel For Nonstationary Gaussian Processes
Nima Negarandeh, Carlos Mora, Ramin Bostanabad · 18 Mar 2025

On the minimax optimality and superiority of deep neural network learning over sparse parameter spaces
Satoshi Hayakawa, Taiji Suzuki · 22 May 2019

Gradient and Newton Boosting for Classification and Regression
Fabio Sigrist · 09 Aug 2018

Overfitting or perfect fitting? Risk bounds for classification and regression rules that interpolate
M. Belkin, Daniel J. Hsu, P. Mitra · 13 Jun 2018

Functional Gradient Boosting based on Residual Network Perception
Atsushi Nitanda, Taiji Suzuki · 25 Feb 2018

Deep Neural Networks Learn Non-Smooth Functions Effectively
Masaaki Imaizumi, Kenji Fukumizu · 13 Feb 2018

To understand deep learning we need to understand kernel learning
M. Belkin, Siyuan Ma, Soumik Mandal · 05 Feb 2018

TF Boosted Trees: A scalable TensorFlow based framework for gradient boosting
Natalia Ponomareva, Soroush Radpour, Gilbert Hendry, Salem Haykal, Thomas Colthurst, Petr Mitrichev, Alexander Grushetsky · 31 Oct 2017

Early stopping for kernel boosting algorithms: A general analysis with localized complexities
Yuting Wei, Fanny Yang, Martin J. Wainwright · 05 Jul 2017

Learning Deep ResNet Blocks Sequentially using Boosting Theory
Furong Huang, Jordan T. Ash, John Langford, Robert Schapire · 15 Jun 2017

Diving into the shallows: a computational perspective on large-scale shallow learning
Siyuan Ma, M. Belkin · 30 Mar 2017

Understanding deep learning requires rethinking generalization
Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, Oriol Vinyals · 10 Nov 2016

Estimation and Prediction using generalized Wendland Covariance Functions under fixed domain asymptotics
M. Bevilacqua, Tarik Faouzi, Reinhard Furrer, Emilio Porcu · 23 Jul 2016

XGBoost: A Scalable Tree Boosting System
Tianqi Chen, Carlos Guestrin · 09 Mar 2016

Explaining the Success of AdaBoost and Random Forests as Interpolating Classifiers
A. Wyner, Matthew A. Olson, J. Bleich, David Mease · 28 Apr 2015

Scalable Kernel Methods via Doubly Stochastic Gradients
Bo Dai, Bo Xie, Niao He, Yingyu Liang, Anant Raj, Maria-Florina Balcan, Le Song · 21 Jul 2014

Early stopping and non-parametric regression: An optimal data-dependent stopping rule
Garvesh Raskutti, Martin J. Wainwright, Bin Yu · 15 Jun 2013

Divide and Conquer Kernel Ridge Regression: A Distributed Algorithm with Minimax Optimal Rates
Yuchen Zhang, John C. Duchi, Martin J. Wainwright · 22 May 2013

Optimal learning rates for Kernel Conjugate Gradient regression
Gilles Blanchard, Nicole Krämer · 29 Sep 2010