Outlier detection in regression: conic quadratic formulations
A. Gómez, J. Neto
arXiv:2307.05975, 12 July 2023

Papers citing "Outlier detection in regression: conic quadratic formulations" (21 papers)

L0Learn: A Scalable Package for Sparse Learning using L0 Regularization
Hussein Hazimeh, Rahul Mazumder, Tim Nonet (10 Feb 2022)

P-split formulations: A class of intermediate formulations between big-M and convex hull for disjunctive constraints
Jan Kronqvist, Ruth Misener, Calvin Tsay (10 Feb 2022)

On the convex hull of convex quadratic optimization problems with indicators
Linchuan Wei, Alper Atamtürk, Andrés Gómez, Simge Küçükyavuz (02 Jan 2022)

Linear regression with partially mismatched data: local search with theoretical guarantees
Rahul Mazumder, Haoyue Wang (03 Jun 2021)

Simultaneous Feature Selection and Outlier Detection with Optimality Guarantees
Luca Insolia, Ana M. Kenney, Francesca Chiaromonte, G. Felici (12 Jul 2020)

Safe Screening Rules for $\ell_0$-Regression
Alper Atamtürk, A. Gómez (19 Apr 2020)

Sparse Regression at Scale: Branch-and-Bound rooted in First-Order Optimization
Hussein Hazimeh, Rahul Mazumder, A. Saab (13 Apr 2020)

Integer Programming for Learning Directed Acyclic Graphs from Continuous Data
Hasan Manzour, Simge Küçükyavuz, Ali Shojaie (23 Apr 2019)

Iterative Least Trimmed Squares for Mixed Linear Regression
Yanyao Shen, Sujay Sanghavi (10 Feb 2019)

Rank-one Convexification for Sparse Regression
Alper Atamtürk, A. Gómez (29 Jan 2019)

Sparse and Smooth Signal Estimation: Convexification of L0 Formulations
Alper Atamtürk, A. Gómez, Shaoning Han (06 Nov 2018)

Fast Best Subset Selection: Coordinate Descent and Local Combinatorial Optimization Algorithms
Hussein Hazimeh, Rahul Mazumder (05 Mar 2018)

Sparse High-Dimensional Regression: Exact Scalable Algorithms and Phase Transitions
Dimitris Bertsimas, Bart P. G. Van Parys (28 Sep 2017)

Subset Selection with Shrinkage: Sparse Linear Modeling when the SNR is low
Rahul Mazumder, P. Radchenko, Antoine Dedieu (10 Aug 2017)

The ALAMO approach to machine learning
Zachary T. Wilson, N. Sahinidis (31 May 2017)

Subset Selection for Multiple Linear Regression via Optimization
Young Woong Park, Diego Klabjan (27 Jan 2017)

Minimization of Akaike's Information Criterion in Linear Regression Analysis via Mixed Integer Nonlinear Program
K. Kimura, Hayato Waki (16 Jun 2016)

Regularization vs. Relaxation: A conic optimization perspective of statistical variable selection
Hongbo Dong, Kun Chen, Jeff T. Linderoth (20 Oct 2015)

Best Subset Selection via a Modern Optimization Lens
Dimitris Bertsimas, Angela King, Rahul Mazumder (11 Jul 2015)

Robust Regression via Hard Thresholding
Kush S. Bhatia, Prateek Jain, Purushottam Kar (08 Jun 2015)

Least quantile regression via modern optimization
Dimitris Bertsimas, Rahul Mazumder (31 Oct 2013)