arXiv:1905.04708
A New Look at an Old Problem: A Universal Learning Approach to Linear Regression
12 May 2019
Koby Bibas, Yaniv Fogel, M. Feder
Papers citing "A New Look at an Old Problem: A Universal Learning Approach to Linear Regression" (23 of 23 papers shown):
1. High Dimensional Binary Classification under Label Shift: Phase Transition and Regularization. Jiahui Cheng, Minshuo Chen, Hao Liu, Tuo Zhao, Wenjing Liao. 01 Dec 2022.
2. Beyond Ridge Regression for Distribution-Free Data. Koby Bibas, M. Feder. 17 Jun 2022.
3. Generalization for multiclass classification with overparameterized linear models. Vignesh Subramanian, Rahul Arya, A. Sahai. 03 Jun 2022. [AI4CE]
4. Bias-variance decomposition of overparameterized regression with random linear features. J. Rocks, Pankaj Mehta. 10 Mar 2022.
5. Fine-Tuning can Distort Pretrained Features and Underperform Out-of-Distribution. Ananya Kumar, Aditi Raghunathan, Robbie Jones, Tengyu Ma, Percy Liang. 21 Feb 2022. [OODD]
6. Single Layer Predictive Normalized Maximum Likelihood for Out-of-Distribution Detection. Koby Bibas, M. Feder, Tal Hassner. 18 Oct 2021. [OODD]
7. Utilizing Adversarial Targeted Attacks to Boost Adversarial Robustness. Uriya Pesso, Koby Bibas, M. Feder. 04 Sep 2021. [AAML]
8. Mitigating deep double descent by concatenating inputs. John Chen, Qihan Wang, Anastasios Kyrillidis. 02 Jul 2021. [BDL]
9. Double Descent Optimization Pattern and Aliasing: Caveats of Noisy Labels. Florian Dubost, Erin Hong, Max Pike, Siddharth Sharma, Siyi Tang, Nandita Bhaskhar, Christopher Lee-Messer, D. Rubin. 03 Jun 2021. [NoLa]
10. The Geometry of Over-parameterized Regression and Adversarial Perturbations. J. Rocks, Pankaj Mehta. 25 Mar 2021. [AAML]
11. Distribution Free Uncertainty for the Minimum Norm Solution of Over-parameterized Linear Regression. Koby Bibas, M. Feder. 14 Feb 2021.
12. Efficient Data-Dependent Learnability. Yaniv Fogel, T. Shapira, M. Feder. 20 Nov 2020.
13. Amortized Conditional Normalized Maximum Likelihood: Reliable Out of Distribution Uncertainty Estimation. Aurick Zhou, Sergey Levine. 05 Nov 2020. [BDL, OOD, UQCV]
14. Memorizing without overfitting: Bias, variance, and interpolation in over-parameterized models. J. Rocks, Pankaj Mehta. 26 Oct 2020.
15. Increasing Depth Leads to U-Shaped Test Risk in Over-parameterized Convolutional Networks. Eshaan Nichani, Adityanarayanan Radhakrishnan, Caroline Uhler. 19 Oct 2020.
16. Learning, compression, and leakage: Minimising classification error via meta-universal compression principles. F. Rosas, P. Mediano, Michael C. Gastpar. 14 Oct 2020.
17. Benign overfitting in ridge regression. Alexander Tsigler, Peter L. Bartlett. 29 Sep 2020.
18. Optimal Regularization Can Mitigate Double Descent. Preetum Nakkiran, Prayaag Venkat, Sham Kakade, Tengyu Ma. 04 Mar 2020.
19. More Data Can Hurt for Linear Regression: Sample-wise Double Descent. Preetum Nakkiran. 16 Dec 2019.
20. Deep Double Descent: Where Bigger Models and More Data Hurt. Preetum Nakkiran, Gal Kaplun, Yamini Bansal, Tristan Yang, Boaz Barak, Ilya Sutskever. 04 Dec 2019.
21. Deep pNML: Predictive Normalized Maximum Likelihood for Deep Neural Networks. Koby Bibas, Yaniv Fogel, M. Feder. 28 Apr 2019. [BDL]
22. Universal Supervised Learning for Individual Data. Yaniv Fogel, M. Feder. 22 Dec 2018. [FedML, SSL]
23. Optimal ridge penalty for real-world high-dimensional data can be zero or negative due to the implicit ridge regularization. D. Kobak, Jonathan Lomond, Benoit Sanchez. 28 May 2018.