No penalty no tears: Least squares in high-dimensional linear models
Xiangyu Wang, David B. Dunson, Chenlei Leng
7 June 2015 · arXiv:1506.02222

Papers citing "No penalty no tears: Least squares in high-dimensional linear models" (4 papers shown)
  1. ExDAG: Exact learning of DAGs
     Pavel Rytír, Ales Wodecki, Jakub Marecek
     21 Jun 2024 (CML)

  2. Efficient and Scalable Structure Learning for Bayesian Networks: Algorithms and Applications
     Rong Zhu, A. Pfadler, Ziniu Wu, Yuxing Han, Xiaoke Yang, Feng Ye, Zhenping Qian, Jingren Zhou, Bin Cui
     07 Dec 2020

  3. Optimal Two-Step Prediction in Regression
     Didier Chételat, Johannes Lederer, Joseph Salmon
     18 Oct 2014

  4. Thresholded Lasso for high dimensional variable selection and statistical estimation
     Shuheng Zhou
     08 Feb 2010