Understanding Generalization through Visualizations

arXiv:1906.03291 · 7 June 2019
Yifan Jiang, Z. Emam, Micah Goldblum, Liam H. Fowl, J. K. Terry, Furong Huang, Tom Goldstein
AI4CE

Papers citing "Understanding Generalization through Visualizations"

27 of 27 citing papers shown.

Can Optimization Trajectories Explain Multi-Task Transfer?
David Mueller, Mark Dredze, Matthew Wiesner · 26 Aug 2024

Bias of Stochastic Gradient Descent or the Architecture: Disentangling the Effects of Overparameterization of Neural Networks
Amit Peleg, Matthias Hein · 04 Jul 2024

Just How Flexible are Neural Networks in Practice?
Ravid Shwartz-Ziv, Micah Goldblum, Arpit Bansal, C. Bayan Bruss, Yann LeCun, Andrew Gordon Wilson · 17 Jun 2024

Revisiting Confidence Estimation: Towards Reliable Failure Prediction
Fei Zhu, Xu-Yao Zhang, Zhen Cheng, Cheng-Lin Liu · UQCV · 05 Mar 2024

Neural Networks Learn Statistics of Increasing Complexity
Nora Belrose, Quintin Pope, Lucia Quirke, Alex Troy Mallen, Xiaoli Z. Fern · 06 Feb 2024

Sharpness-Aware Minimization Revisited: Weighted Sharpness as a Regularization Term
Yun Yue, Jiadi Jiang, Zhiling Ye, Ni Gao, Yongchao Liu, Kecheng Zhang · MLAU, ODL · 25 May 2023

Probing optimisation in physics-informed neural networks
Nayara Fonseca, V. Guidetti, Will Trojak · 27 Mar 2023

Rethinking Confidence Calibration for Failure Prediction
Fei Zhu, Zhen Cheng, Xu-Yao Zhang, Cheng-Lin Liu · UQCV · 06 Mar 2023

DART: Diversify-Aggregate-Repeat Training Improves Generalization of Neural Networks
Samyak Jain, Sravanti Addepalli, P. Sahu, Priyam Dey, R. Venkatesh Babu · MoMe, OOD · 28 Feb 2023

Complex Clipping for Improved Generalization in Machine Learning
L. Atlas, Nicholas Rasmussen, Felix Schwock, Mert Pilanci · 27 Feb 2023

A picture of the space of typical learnable tasks
Rahul Ramesh, Jialin Mao, Itay Griniasty, Rubing Yang, H. Teoh, Mark K. Transtrum, James P. Sethna, Pratik Chaudhari · SSL, DRL · 31 Oct 2022

Correlation of the importances of neural network weights calculated by modern methods of overcoming catastrophic forgetting
Alexey Kutalev · 24 Oct 2022

K-SAM: Sharpness-Aware Minimization at the Speed of SGD
Renkun Ni, Ping Yeh-Chiang, Jonas Geiping, Micah Goldblum, A. Wilson, Tom Goldstein · 23 Oct 2022

How Much Data Are Augmentations Worth? An Investigation into Scaling Laws, Invariance, and Implicit Regularization
Jonas Geiping, Micah Goldblum, Gowthami Somepalli, Ravid Shwartz-Ziv, Tom Goldstein, A. Wilson · 12 Oct 2022

Improving the Sample Efficiency of Prompt Tuning with Domain Adaptation
Xu Guo, Boyang Albert Li, Han Yu · VLM · 06 Oct 2022

MLPInit: Embarrassingly Simple GNN Training Acceleration with MLP Initialization
Xiaotian Han, Tong Zhao, Yozen Liu, Xia Hu, Neil Shah · GNN · 30 Sep 2022

Linear Connectivity Reveals Generalization Strategies
Jeevesh Juneja, Rachit Bansal, Kyunghyun Cho, João Sedoc, Naomi Saphra · 24 May 2022

Pre-Train Your Loss: Easy Bayesian Transfer Learning with Informative Priors
Ravid Shwartz-Ziv, Micah Goldblum, Hossein Souri, Sanyam Kapoor, Chen Zhu, Yann LeCun, A. Wilson · UQCV, BDL · 20 May 2022

Active Learning at the ImageNet Scale
Z. Emam, Hong-Min Chu, Ping Yeh-Chiang, W. Czaja, R. Leapman, Micah Goldblum, Tom Goldstein · 25 Nov 2021

Stochastic Training is Not Necessary for Generalization
Jonas Geiping, Micah Goldblum, Phillip E. Pope, Michael Moeller, Tom Goldstein · 29 Sep 2021

Loss Surface Simplexes for Mode Connecting Volumes and Fast Ensembling
Gregory W. Benton, Wesley J. Maddox, Sanae Lotfi, A. Wilson · UQCV · 25 Feb 2021

Preventing Unauthorized Use of Proprietary Data: Poisoning for Secure Dataset Release
Liam H. Fowl, Ping Yeh-Chiang, Micah Goldblum, Jonas Geiping, Arpit Bansal, W. Czaja, Tom Goldstein · 16 Feb 2021

Learning Optimal Representations with the Decodable Information Bottleneck
Yann Dubois, Douwe Kiela, D. Schwab, Ramakrishna Vedantam · 27 Sep 2020

Bayesian Deep Learning and a Probabilistic Perspective of Generalization
A. Wilson, Pavel Izmailov · UQCV, BDL, OOD · 20 Feb 2020

Generalization Guarantees for Neural Networks via Harnessing the Low-rank Structure of the Jacobian
Samet Oymak, Zalan Fabian, Mingchen Li, Mahdi Soltanolkotabi · MLT · 12 Jun 2019

On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang · ODL · 15 Sep 2016

Norm-Based Capacity Control in Neural Networks
Behnam Neyshabur, Ryota Tomioka, Nathan Srebro · 27 Feb 2015