ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

arXiv:1802.08246
Characterizing Implicit Bias in Terms of Optimization Geometry
Suriya Gunasekar, Jason D. Lee, Daniel Soudry, Nathan Srebro
22 February 2018 · AI4CE

Papers citing "Characterizing Implicit Bias in Terms of Optimization Geometry"

50 / 110 papers shown
Implicit Bias of Linear Equivariant Networks
Hannah Lawrence, Kristian Georgiev, A. Dienes, B. Kiani
12 Oct 2021 · AI4CE

On the Self-Penalization Phenomenon in Feature Selection
Michael I. Jordan, Keli Liu, Feng Ruan
12 Oct 2021

On Margin Maximization in Linear and ReLU Networks
Gal Vardi, Ohad Shamir, Nathan Srebro
06 Oct 2021

Spectral Bias in Practice: The Role of Function Frequency in Generalization
Sara Fridovich-Keil, Raphael Gontijo-Lopes, Rebecca Roelofs
06 Oct 2021

Federated Asymptotics: a model to compare federated learning algorithms
Gary Cheng, Karan N. Chadha, John C. Duchi
16 Aug 2021 · FedML

SGD: The Role of Implicit Regularization, Batch-size and Multiple-epochs
Satyen Kale, Ayush Sekhari, Karthik Sridharan
11 Jul 2021

A Theoretical Analysis of Fine-tuning with Linear Teachers
Gal Shachaf, Alon Brutzkus, Amir Globerson
04 Jul 2021

What can linearized neural networks actually say about generalization?
Guillermo Ortiz-Jiménez, Seyed-Mohsen Moosavi-Dezfooli, P. Frossard
12 Jun 2021
From inexact optimization to learning via gradient concentration
Bernhard Stankewitz, Nicole Mücke, Lorenzo Rosasco
09 Jun 2021

A Geometric Analysis of Neural Collapse with Unconstrained Features
Zhihui Zhu, Tianyu Ding, Jinxin Zhou, Xiao Li, Chong You, Jeremias Sulam, Qing Qu
06 May 2021

Risk Bounds for Over-parameterized Maximum Margin Classification on Sub-Gaussian Mixtures
Yuan Cao, Quanquan Gu, M. Belkin
28 Apr 2021

On the Implicit Bias of Initialization Shape: Beyond Infinitesimal Mirror Descent
Shahar Azulay, E. Moroshko, Mor Shpigel Nacson, Blake E. Woodworth, Nathan Srebro, Amir Globerson, Daniel Soudry
19 Feb 2021 · AI4CE

Dissecting Supervised Contrastive Learning
Florian Graf, Christoph Hofer, Marc Niethammer, Roland Kwitt
17 Feb 2021 · SSL

SGD Generalizes Better Than GD (And Regularization Doesn't Help)
I. Zaghloul Amir, Tomer Koren, Roi Livni
01 Feb 2021

NeurIPS 2020 Competition: Predicting Generalization in Deep Learning
Yiding Jiang, Pierre Foret, Scott Yak, Daniel M. Roy, H. Mobahi, Gintare Karolina Dziugaite, Samy Bengio, Suriya Gunasekar, Isabelle M. Guyon, Behnam Neyshabur (Google Research)
14 Dec 2020 · OOD
The Implicit Bias for Adaptive Optimization Algorithms on Homogeneous Neural Networks
Bohan Wang, Qi Meng, Wei Chen, Tie-Yan Liu
11 Dec 2020

Direction Matters: On the Implicit Bias of Stochastic Gradient Descent with Moderate Learning Rate
Jingfeng Wu, Difan Zou, Vladimir Braverman, Quanquan Gu
04 Nov 2020

The Deep Bootstrap Framework: Good Online Learners are Good Offline Generalizers
Preetum Nakkiran, Behnam Neyshabur, Hanie Sedghi
16 Oct 2020 · OffRL

A Unifying View on Implicit Bias in Training Linear Neural Networks
Chulhee Yun, Shankar Krishnan, H. Mobahi
06 Oct 2020 · MLT

Obtaining Adjustable Regularization for Free via Iterate Averaging
Jingfeng Wu, Vladimir Braverman, Lin F. Yang
15 Aug 2020

Implicit Bias in Deep Linear Classification: Initialization Scale vs Training Accuracy
E. Moroshko, Suriya Gunasekar, Blake E. Woodworth, Jason D. Lee, Nathan Srebro, Daniel Soudry
13 Jul 2020
Statistical and Algorithmic Insights for Semi-supervised Learning with Self-training
Samet Oymak, Talha Cihad Gulcu
19 Jun 2020

When Does Preconditioning Help or Hurt Generalization?
S. Amari, Jimmy Ba, Roger C. Grosse, Xuechen Li, Atsushi Nitanda, Taiji Suzuki, Denny Wu, Ji Xu
18 Jun 2020

Neural Anisotropy Directions
Guillermo Ortiz-Jiménez, Apostolos Modas, Seyed-Mohsen Moosavi-Dezfooli, P. Frossard
17 Jun 2020

Directional Pruning of Deep Neural Networks
Shih-Kang Chao, Zhanyu Wang, Yue Xing, Guang Cheng
16 Jun 2020 · ODL

Shape Matters: Understanding the Implicit Bias of the Noise Covariance
Jeff Z. HaoChen, Colin Wei, Jason D. Lee, Tengyu Ma
15 Jun 2020

To Each Optimizer a Norm, To Each Norm its Generalization
Sharan Vaswani, Reza Babanezhad, Jose Gallego, Aaron Mishkin, Simon Lacoste-Julien, Nicolas Le Roux
11 Jun 2020

Directional convergence and alignment in deep learning
Ziwei Ji, Matus Telgarsky
11 Jun 2020
Halting Time is Predictable for Large Models: A Universality Property and Average-case Analysis
Courtney Paquette, B. V. Merrienboer, Elliot Paquette, Fabian Pedregosa
08 Jun 2020

Classification vs regression in overparameterized regimes: Does the loss function matter?
Vidya Muthukumar, Adhyyan Narang, Vignesh Subramanian, M. Belkin, Daniel J. Hsu, A. Sahai
16 May 2020

Learning the gravitational force law and other analytic functions
Atish Agarwala, Abhimanyu Das, Rina Panigrahy, Qiuyi Zhang
15 May 2020 · MLT

Implicit Regularization in Deep Learning May Not Be Explainable by Norms
Noam Razin, Nadav Cohen
13 May 2020

On Learning Rates and Schrödinger Operators
Bin Shi, Weijie J. Su, Michael I. Jordan
15 Apr 2020

Implicit Bias of Gradient Descent for Wide Two-layer Neural Networks Trained with the Logistic Loss
Lénaïc Chizat, Francis R. Bach
11 Feb 2020 · MLT

A Precise High-Dimensional Asymptotic Theory for Boosting and Minimum-$\ell_1$-Norm Interpolated Classifiers
Tengyuan Liang, Pragya Sur
05 Feb 2020
Towards Understanding the Spectral Bias of Deep Learning
Yuan Cao, Zhiying Fang, Yue Wu, Ding-Xuan Zhou, Quanquan Gu
03 Dec 2019

Beyond Linearization: On Quadratic and Higher-Order Approximation of Wide Neural Networks
Yu Bai, Jason D. Lee
03 Oct 2019

Gradient Descent Maximizes the Margin of Homogeneous Neural Networks
Kaifeng Lyu, Jian Li
13 Jun 2019

Kernel and Rich Regimes in Overparametrized Models
Blake E. Woodworth, Suriya Gunasekar, Pedro H. P. Savarese, E. Moroshko, Itay Golan, Jason D. Lee, Daniel Soudry, Nathan Srebro
13 Jun 2019

The Implicit Bias of AdaGrad on Separable Data
Qian Qian, Xiaoyuan Qian
09 Jun 2019

PowerSGD: Practical Low-Rank Gradient Compression for Distributed Optimization
Thijs Vogels, Sai Praneeth Karimireddy, Martin Jaggi
31 May 2019

Implicit Regularization in Deep Matrix Factorization
Sanjeev Arora, Nadav Cohen, Wei Hu, Yuping Luo
31 May 2019 · AI4CE
Generalization bounds for deep convolutional neural networks
Philip M. Long, Hanie Sedghi
29 May 2019 · MLT

A type of generalization error induced by initialization in deep neural networks
Tao Luo, Zhi-Qin John Xu, Yaoyu Zhang, Zheng Ma
19 May 2019

Lexicographic and Depth-Sensitive Margins in Homogeneous and Non-Homogeneous Deep Models
Mor Shpigel Nacson, Suriya Gunasekar, Jason D. Lee, Nathan Srebro, Daniel Soudry
17 May 2019

Data-dependent Sample Complexity of Deep Neural Networks via Lipschitz Augmentation
Colin Wei, Tengyu Ma
09 May 2019

A Selective Overview of Deep Learning
Jianqing Fan, Cong Ma, Yiqiao Zhong
10 Apr 2019 · BDL, VLM

Surprises in High-Dimensional Ridgeless Least Squares Interpolation
Trevor Hastie, Andrea Montanari, Saharon Rosset, R. Tibshirani
19 Mar 2019

An Empirical Study of Large-Batch Stochastic Gradient Descent with Structured Covariance Noise
Yeming Wen, Kevin Luk, Maxime Gazeau, Guodong Zhang, Harris Chan, Jimmy Ba
21 Feb 2019 · ODL

Asymmetric Valleys: Beyond Sharp and Flat Local Minima
Haowei He, Gao Huang, Yang Yuan
02 Feb 2019 · ODL, MLT