Trust, but verify: benefits and pitfalls of least-squares refitting in high dimensions

1 June 2013
Johannes Lederer
arXiv: 1306.0113
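
For context, the paper concerns two-stage estimation in which ordinary least squares is refitted on the support selected by the Lasso. The following is a minimal sketch of that two-stage procedure, assuming scikit-learn and NumPy; the simulated data and the hand-picked penalty level alpha=0.3 are illustrative only and are not taken from the paper.

```python
# Minimal sketch: Lasso selection followed by least-squares refitting (illustrative only).
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(0)
n, p, s = 100, 200, 5                      # high-dimensional setting: p > n, s-sparse signal
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:s] = 3.0                             # true nonzero coefficients
y = X @ beta + rng.standard_normal(n)

# Step 1 ("trust"): run the Lasso to select a candidate support.
lasso = Lasso(alpha=0.3).fit(X, y)
support = np.flatnonzero(lasso.coef_)

# Step 2 ("verify"): refit unpenalized least squares on the selected variables only.
refit = LinearRegression().fit(X[:, support], y)

beta_refit = np.zeros(p)
beta_refit[support] = refit.coef_

print("selected variables:", support)
print("Lasso vs refitted coefficients on the support:")
print(np.column_stack([lasso.coef_[support], refit.coef_]))
```

The refitting step removes the shrinkage bias of the Lasso estimates on the selected support, which is the benefit (and, when the support is misselected, the pitfall) discussed in the paper.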

Papers citing "Trust, but verify: benefits and pitfalls of least-squares refitting in high dimensions"

13 / 13 papers shown
The Group Square-Root Lasso: Theoretical Properties and Fast Algorithms
F. Bunea, Johannes Lederer, Yiyuan She (01 Feb 2013)

How Correlations Influence Lasso Prediction
Mohamed Hebiri, Johannes Lederer (07 Apr 2012)

Scaled Sparse Linear Regression
Tingni Sun, Cun-Hui Zhang (24 Apr 2011)

Nuclear norm penalization and optimal rates for noisy low rank matrix completion
V. Koltchinskii, Alexandre B. Tsybakov, Karim Lounici (29 Nov 2010)

Square-Root Lasso: Pivotal Recovery of Sparse Signals via Conic Programming
A. Belloni, Victor Chernozhukov, Lie Wang (28 Sep 2010)

Exponential Screening and optimal rates of sparse estimation
Philippe Rigollet, Alexandre B. Tsybakov (12 Mar 2010)

Least squares after model selection in high-dimensional sparse models
A. Belloni, Victor Chernozhukov (31 Dec 2009)

On the conditions used to prove oracle results for the Lasso
Sara van de Geer, Peter Buhlmann (05 Oct 2009)

Honest variable selection in linear and logistic regression models via $\ell_1$ and $\ell_1+\ell_2$ penalization
F. Bunea (29 Aug 2008)

Lasso-type recovery of sparse representations for high-dimensional data
N. Meinshausen, Bin Yu (01 Jun 2008)

Sup-norm convergence rate and sign concentration property of Lasso and Dantzig estimators
Karim Lounici (30 Jan 2008)

Simultaneous analysis of Lasso and Dantzig selector
Peter J. Bickel, Ya'acov Ritov, Alexandre B. Tsybakov (07 Jan 2008)

Enhancing Sparsity by Reweighted L1 Minimization
Emmanuel J. Candes, M. Wakin, Stephen P. Boyd (10 Nov 2007)