ResearchTrend.AI — Cited By: arXiv 2210.05021
The good, the bad and the ugly sides of data augmentation: An implicit spectral regularization perspective

Chi-Heng Lin, Chiraag Kaushik, Eva L. Dyer, Vidya Muthukumar
10 October 2022

Papers citing "The good, the bad and the ugly sides of data augmentation: An implicit spectral regularization perspective"

50 of 52 citing papers shown.
Mitigating multiple descents: A model-agnostic framework for risk monotonization
Pratik V. Patil, Arun K. Kuchibhotla, Yuting Wei, Alessandro Rinaldo
25 May 2022

Masked Siamese Networks for Label-Efficient Learning
Mahmoud Assran, Mathilde Caron, Ishan Misra, Piotr Bojanowski, Florian Bordes, Pascal Vincent, Armand Joulin, Michael G. Rabbat, Nicolas Ballas
14 Apr 2022 · SSL

Data Augmentation as Feature Manipulation
Ruoqi Shen, Sébastien Bubeck, Suriya Gunasekar
03 Mar 2022 · MLT

Boosting Robustness of Image Matting with Context Assembling and Strong Data Augmentation
Yutong Dai, Brian L. Price, He Zhang, Chunhua Shen
18 Jan 2022

Masked Autoencoders Are Scalable Vision Learners
Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross B. Girshick
11 Nov 2021 · ViT, TPM

Harmless interpolation in regression and classification with structured features
Andrew D. McRae, Santhosh Karnik, Mark A. Davenport, Vidya Muthukumar
09 Nov 2021

Drop, Swap, and Generate: A Self-Supervised Approach for Generating Neural Activity
Ran Liu, Mehdi Azabou, M. Dabagia, Chi-Heng Lin, M. G. Azar, Keith B. Hengen, Michal Valko, Eva L. Dyer
03 Nov 2021 · OCL, SSL, DRL
A Farewell to the Bias-Variance Tradeoff? An Overview of the Theory of Overparameterized Machine Learning
Yehuda Dar, Vidya Muthukumar, Richard G. Baraniuk
06 Sep 2021

Comparing Classes of Estimators: When does Gradient Descent Beat Ridge Regression in Linear Models?
Dominic Richards, Yan Sun, Patrick Rebeschini
26 Aug 2021

A Survey of Data Augmentation Approaches for NLP
Steven Y. Feng, Varun Gangal, Jason W. Wei, Sarath Chandar, Soroush Vosoughi, Teruko Mitamura, Eduard H. Hovy
07 May 2021 · AIMat

Risk Bounds for Over-parameterized Maximum Margin Classification on Sub-Gaussian Mixtures
Yuan Cao, Quanquan Gu, M. Belkin
28 Apr 2021

A Simple Baseline for Semi-supervised Semantic Segmentation with Strong Data Augmentation
Jianlong Yuan, Yifan Liu, Chunhua Shen, Zhibin Wang, Hao Li
15 Apr 2021

How rotational invariance of common kernels prevents generalization in high dimensions
Konstantin Donhauser, Mingqi Wu, Fanny Yang
09 Apr 2021

Benign Overfitting of Constant-Stepsize SGD for Linear Regression
Difan Zou, Jingfeng Wu, Vladimir Braverman, Quanquan Gu, Sham Kakade
23 Mar 2021

Barlow Twins: Self-Supervised Learning via Redundancy Reduction
Jure Zbontar, Li Jing, Ishan Misra, Yann LeCun, Stéphane Deny
04 Mar 2021 · SSL

Learning with invariances in random features and kernel models
Song Mei, Theodor Misiakiewicz, Andrea Montanari
25 Feb 2021 · OOD
Mine Your Own vieW: Self-Supervised Learning Through Across-Sample Prediction
Mehdi Azabou, M. G. Azar, Ran Liu, Chi-Heng Lin, Erik C. Johnson, ..., Lindsey Kitchell, Keith B. Hengen, William R. Gray Roncal, Michal Valko, Eva L. Dyer
19 Feb 2021 · AI4TS

Negative Data Augmentation
Abhishek Sinha, Kumar Ayush, Jiaming Song, Burak Uzkent, Hongxia Jin, Stefano Ermon
09 Feb 2021

Towards Understanding Ensemble, Knowledge Distillation and Self-Distillation in Deep Learning
Zeyuan Allen-Zhu, Yuanzhi Li
17 Dec 2020 · FedML

Binary Classification of Gaussian Mixtures: Abundance of Support Vectors, Benign Overfitting and Regularization
Ke Wang, Christos Thrampoulidis
18 Nov 2020

How Data Augmentation affects Optimization for Linear Regression
Boris Hanin, Yi Sun
21 Oct 2020

Benign overfitting in ridge regression
Alexander Tsigler, Peter L. Bartlett
29 Sep 2020

Bootstrap your own latent: A new approach to self-supervised Learning
Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre Harvey Richemond, ..., M. G. Azar, Bilal Piot, Koray Kavukcuoglu, Rémi Munos, Michal Valko
13 Jun 2020 · SSL

Evaluation of Neural Architectures Trained with Square Loss vs Cross-Entropy in Classification Tasks
Like Hui, M. Belkin
12 Jun 2020 · UQCV, AAML, VLM

On the Optimal Weighted $\ell_2$ Regularization in Overparameterized Linear Regression
Denny Wu, Ji Xu
10 Jun 2020
Classification vs regression in overparameterized regimes: Does the loss function matter?
Vidya Muthukumar, Adhyyan Narang, Vignesh Subramanian, M. Belkin, Daniel J. Hsu, A. Sahai
16 May 2020

On the Generalization Effects of Linear Transformations in Data Augmentation
Sen Wu, Hongyang R. Zhang, Gregory Valiant, Christopher Ré
02 May 2020

Finite-sample Analysis of Interpolating Linear Classifiers in the Overparameterized Regime
Niladri S. Chatterji, Philip M. Long
25 Apr 2020

Optimal Regularization Can Mitigate Double Descent
Preetum Nakkiran, Prayaag Venkat, Sham Kakade, Tengyu Ma
04 Mar 2020

Time Series Data Augmentation for Deep Learning: A Survey
Qingsong Wen, Liang Sun, Fan Yang, Xiaomin Song, Jing Gao, Xue Wang, Huan Xu
27 Feb 2020 · AI4TS

A Simple Framework for Contrastive Learning of Visual Representations
Ting Chen, Simon Kornblith, Mohammad Norouzi, Geoffrey E. Hinton
13 Feb 2020 · SSL

A Model of Double Descent for High-dimensional Binary Linear Classification
Zeyu Deng, A. Kammoun, Christos Thrampoulidis
13 Nov 2019

Enhanced Convolutional Neural Tangent Kernels
Zhiyuan Li, Ruosong Wang, Dingli Yu, S. Du, Wei Hu, Ruslan Salakhutdinov, Sanjeev Arora
03 Nov 2019

Benign Overfitting in Linear Regression
Peter L. Bartlett, Philip M. Long, Gábor Lugosi, Alexander Tsigler
26 Jun 2019 · MLT
Invariance-inducing regularization using worst-case transformations suffices to boost accuracy and spatial robustness
Fanny Yang, Zuowen Wang, C. Heinze-Deml
26 Jun 2019

Harmless interpolation of noisy data in regression
Vidya Muthukumar, Kailas Vodrahalli, Vignesh Subramanian, A. Sahai
21 Mar 2019

Surprises in High-Dimensional Ridgeless Least Squares Interpolation
Trevor Hastie, Andrea Montanari, Saharon Rosset, Robert Tibshirani
19 Mar 2019

Two models of double descent for weak features
M. Belkin, Daniel J. Hsu, Ji Xu
18 Mar 2019

Reconciling modern machine learning practice and the bias-variance trade-off
M. Belkin, Daniel J. Hsu, Siyuan Ma, Soumik Mandal
28 Dec 2018

Deep Knockoffs
Yaniv Romano, Matteo Sesia, Emmanuel J. Candès
16 Nov 2018 · BDL

On the Implicit Bias of Dropout
Poorya Mianjy, R. Arora, René Vidal
26 Jun 2018

Optimal ridge penalty for real-world high-dimensional data can be zero or negative due to the implicit ridge regularization
D. Kobak, Jonathan Lomond, Benoit Sanchez
28 May 2018

A Kernel Theory of Modern Data Augmentation
Tri Dao, Albert Gu, Alexander J. Ratner, Virginia Smith, Christopher De Sa, Christopher Ré
16 Mar 2018

mixup: Beyond Empirical Risk Minimization
Hongyi Zhang, Moustapha Cissé, Yann N. Dauphin, David Lopez-Paz
25 Oct 2017 · NoLa
Dropout as a Low-Rank Regularizer for Matrix Factorization
Jacopo Cavazza, Pietro Morerio, B. Haeffele, Connor Lane, Vittorio Murino, René Vidal
13 Oct 2017

Learning to Compose Domain-Specific Transformations for Data Augmentation
Alexander J. Ratner, Henry R. Ehrenberg, Zeshan Hussain, Jared A. Dunnmon, Christopher Ré
06 Sep 2017

Improved Regularization of Convolutional Neural Networks with Cutout
Terrance Devries, Graham W. Taylor
15 Aug 2017

Local Group Invariant Representations via Orbit Embeddings
Anant Raj, Abhishek Kumar, Youssef Mroueh, Tom Fletcher, Bernhard Schölkopf
06 Dec 2016

Group Equivariant Convolutional Networks
Taco S. Cohen, Max Welling
24 Feb 2016 · BDL

Dropout as data augmentation
Xavier Bouthillier, K. Konda, Pascal Vincent, Roland Memisevic
29 Jun 2015