Learning subgaussian classes: Upper and minimax bounds
Guillaume Lecué, S. Mendelson (21 May 2013)

Papers citing "Learning subgaussian classes: Upper and minimax bounds" (23 papers shown)
  1. Sharp Rates in Dependent Learning Theory: Avoiding Sample Size Deflation for the Square Loss. Ingvar M. Ziemann, Stephen Tu, George J. Pappas, Nikolai Matni. 08 Feb 2024.
  2. A Non-Asymptotic Moreau Envelope Theory for High-Dimensional Generalized Linear Models. Lijia Zhou, Frederic Koehler, Pragya Sur, Danica J. Sutherland, Nathan Srebro. 21 Oct 2022.
  3. Learning with little mixing. Ingvar M. Ziemann, Stephen Tu. 16 Jun 2022.
  4. Exponential Tail Local Rademacher Complexity Risk Bounds Without the Bernstein Condition. Varun Kanade, Patrick Rebeschini, Tomas Vaskevicius. 23 Feb 2022.
  5. Non-Asymptotic Guarantees for Robust Statistical Learning under Infinite Variance Assumption. Lihu Xu, Fang Yao, Qiuran Yao, Huiming Zhang. 10 Jan 2022.
  6. A spectral algorithm for robust regression with subgaussian rates. Jules Depersin. 12 Jul 2020.
  7. Empirical Risk Minimization under Random Censorship: Theory and Practice. Guillaume Ausset, Stéphan Clémençon, François Portier. 05 Jun 2019.
  8. Robust high dimensional learning for Lipschitz and convex losses. Geoffrey Chinot, Guillaume Lecué, M. Lerasle. 10 May 2019.
  9. Robust classification via MOM minimization. Guillaume Lecué, M. Lerasle, Timothée Mathieu. 09 Aug 2018.
  10. Localized Gaussian width of $M$-convex hulls with applications to Lasso and convex aggregation. Pierre C. Bellec. 30 May 2017.
  11. Estimation bounds and sharp oracle inequalities of regularized procedures with Lipschitz loss functions. Pierre Alquier, V. Cottet, Guillaume Lecué. 05 Feb 2017.
  12. Learning from MOM's principles: Le Cam's approach. Guillaume Lecué, Matthieu Lerasle. 08 Jan 2017.
  13. Risk minimization by median-of-means tournaments. Gabor Lugosi, S. Mendelson. 02 Aug 2016.
  14. On optimality of empirical risk minimization in linear aggregation. Adrien Saumard. 11 May 2016.
  15. High-Dimensional Estimation of Structured Signals from Non-Linear Observations with General Convex Loss Functions. Martin Genzel. 10 Feb 2016.
  16. 'local' vs. 'global' parameters -- breaking the gaussian complexity barrier. S. Mendelson. 09 Apr 2015.
  17. On aggregation for heavy-tailed classes. S. Mendelson. 25 Feb 2015.
  18. The generalized Lasso with non-linear observations. Y. Plan, Roman Vershynin. 13 Feb 2015.
  19. Learning without Concentration for General Loss Functions. S. Mendelson. 13 Oct 2014.
  20. Geometric Inference for General High-Dimensional Linear Inverse Problems. T. Tony Cai, Tengyuan Liang, Alexander Rakhlin. 17 Apr 2014.
  21. Performance of empirical risk minimization in linear aggregation. Guillaume Lecué, S. Mendelson. 24 Feb 2014.
  22. Learning without Concentration. S. Mendelson. 01 Jan 2014.
  23. Concentration in unbounded metric spaces and algorithmic stability. A. Kontorovich. 04 Sep 2013.