ResearchTrend.AI

Understanding Gradient Regularization in Deep Learning: Efficient Finite-Difference Computation and Implicit Bias
arXiv: 2210.02720 (Cited By)

6 October 2022
Ryo Karakida, Tomoumi Takase, Tomohiro Hayase, Kazuki Osawa

Papers citing "Understanding Gradient Regularization in Deep Learning: Efficient Finite-Difference Computation and Implicit Bias"

2 / 2 papers shown
Why Does Little Robustness Help? Understanding and Improving Adversarial Transferability from Surrogate Training (15 Jul 2023)
Yechao Zhang, Shengshan Hu, Leo Yu Zhang, Junyu Shi, Minghui Li, Xiaogeng Liu, Wei Wan, Hai Jin
AAML
Catastrophic Fisher Explosion: Early Phase Fisher Matrix Impacts Generalization (28 Dec 2020)
Stanislaw Jastrzebski, Devansh Arpit, Oliver Åstrand, Giancarlo Kerg, Huan Wang, Caiming Xiong, R. Socher, Kyunghyun Cho, Krzysztof J. Geras
AI4CE