The role of regularization in classification of high-dimensional noisy Gaussian mixture

26 February 2020
Francesca Mignacco, Florent Krzakala, Yue M. Lu, Lenka Zdeborová
arXiv:2002.11544

Papers citing "The role of regularization in classification of high-dimensional noisy Gaussian mixture"

23 citing papers

The Double Descent Behavior in Two Layer Neural Network for Binary Classification
Chathurika S Abeykoon, A. Beknazaryan, Hailin Sang (27 Apr 2025)

The Effect of Optimal Self-Distillation in Noisy Gaussian Mixture Model
Kaito Takanami, Takashi Takahashi, Ayaka Sakata (27 Jan 2025)

Class Imbalance in Anomaly Detection: Learning from an Exactly Solvable Model
F.S. Pezzicoli, V. Ros, F.P. Landes, M. Baity-Jesi (20 Jan 2025)

Analysis of High-dimensional Gaussian Labeled-unlabeled Mixture Model via Message-passing Algorithm
Xiaosi Gu, Tomoyuki Obuchi (29 Nov 2024)

When resampling/reweighting improves feature learning in imbalanced classification?: A toy-model study
Tomoyuki Obuchi, Toshiyuki Tanaka (09 Sep 2024)

Restoring balance: principled under/oversampling of data for optimal classification
Emanuele Loffredo, Mauro Pastore, Simona Cocco, R. Monasson (15 May 2024)

Analytic Study of Double Descent in Binary Classification: The Impact of Loss
Ganesh Ramachandra Kini, Christos Thrampoulidis (30 Jan 2020)

Deep Double Descent: Where Bigger Models and More Data Hurt
Preetum Nakkiran, Gal Kaplun, Yamini Bansal, Tristan Yang, Boaz Barak, Ilya Sutskever (04 Dec 2019)

A Model of Double Descent for High-dimensional Binary Linear Classification
Zeyu Deng, A. Kammoun, Christos Thrampoulidis (13 Nov 2019)

The generalization error of random features regression: Precise asymptotics and double descent curve
Song Mei, Andrea Montanari (14 Aug 2019)

Asymptotic Bayes risk for Gaussian mixture in a semi-supervised setting
Marc Lelarge, Léo Miolane (08 Jul 2019)

The Impact of Regularization on High-dimensional Logistic Regression
Fariborz Salehi, Ehsan Abbasi, B. Hassibi (10 Jun 2019)

Understanding overfitting peaks in generalization error: Analytical risk curves for $l_2$ and $l_1$ penalized interpolation
P. Mitra (09 Jun 2019)

High Dimensional Classification via Regularized and Unregularized Empirical Risk Minimization: Precise Error and Optimal Loss
Xiaoyi Mai, Zhenyu Liao (31 May 2019)

Surprises in High-Dimensional Ridgeless Least Squares Interpolation
Trevor Hastie, Andrea Montanari, Saharon Rosset, Robert Tibshirani (19 Mar 2019)

Reconciling modern machine learning practice and the bias-variance trade-off
M. Belkin, Daniel J. Hsu, Siyuan Ma, Soumik Mandal (28 Dec 2018)

The jamming transition as a paradigm to understand the loss landscape of deep neural networks
Mario Geiger, S. Spigler, Stéphane d'Ascoli, Levent Sagun, Marco Baity-Jesi, Giulio Biroli, Matthieu Wyart (25 Sep 2018)

The phase transition for the existence of the maximum likelihood estimate in high-dimensional logistic regression
Emmanuel J. Candes, Pragya Sur (25 Apr 2018)

A modern maximum-likelihood theory for high-dimensional logistic regression
Pragya Sur, Emmanuel J. Candes (19 Mar 2018)

The Implicit Bias of Gradient Descent on Separable Data
Daniel Soudry, Elad Hoffer, Mor Shpigel Nacson, Suriya Gunasekar, Nathan Srebro (27 Oct 2017)

Phase transitions and optimal algorithms in high-dimensional Gaussian mixture clustering
T. Lesieur, Caterina De Bacco, Jessica E. Banks, Florent Krzakala, Cristopher Moore, Lenka Zdeborová (10 Oct 2016)

High-Dimensional Asymptotics of Prediction: Ridge Regression and Classification
Yan Sun, Stefan Wager (10 Jul 2015)

Message Passing Algorithms for Compressed Sensing
D. Donoho, A. Maleki, Andrea Montanari (21 Jul 2009)