Do ImageNet-trained models learn shortcuts? The impact of frequency shortcuts on generalization
arXiv 2503.03519, v2 (latest)
5 March 2025
Shunxin Wang, Raymond N. J. Veldhuis, N. Strisciuglio
Topics: VLM

Papers citing "Do ImageNet-trained models learn shortcuts? The impact of frequency shortcuts on generalization"

21 of 21 citing papers shown. Each entry lists the title, authors, topic tags (where present), the page's numeric counters as displayed, and the publication date.

Neural Redshift: Random Networks are not Random Functions
Damien Teney, A. Nicolicioiu, Valentin Hartmann, Ehsan Abbasnejad
149 · 25 · 0 · 04 Mar 2024

DFM-X: Augmentation by Leveraging Prior Knowledge of Shortcut Learning
Shunxin Wang, C. Brune, Raymond N. J. Veldhuis, N. Strisciuglio
Topics: AAML
49 · 4 · 0 · 12 Aug 2023

Can contrastive learning avoid shortcut solutions?
Joshua Robinson, Li Sun, Ke Yu, Kayhan Batmanghelich, Stefanie Jegelka, S. Sra
Topics: SSL
74 · 145 · 0 · 21 Jun 2021

Escaping the Big Data Paradigm with Compact Transformers
Ali Hassani, Steven Walton, Nikhil Shah, Abulikemu Abuduweili, Jiachen Li, Humphrey Shi
120 · 462 · 0 · 12 Apr 2021

A Comprehensive Evaluation Framework for Deep Model Robustness
Jun Guo, Wei Bao, Jiakai Wang, Yuqing Ma, Xing Gao, Gang Xiao, Aishan Liu, Zehao Zhao, Xianglong Liu, Wenjun Wu
Topics: AAML, ELM
62 · 60 · 0 · 24 Jan 2021

Unlearnable Examples: Making Personal Data Unexploitable
Hanxun Huang, Xingjun Ma, S. Erfani, James Bailey, Yisen Wang
Topics: MIA, CV
234 · 194 · 0 · 13 Jan 2021

Gradient Starvation: A Learning Proclivity in Neural Networks
Mohammad Pezeshki, Sekouba Kaba, Yoshua Bengio, Aaron Courville, Doina Precup, Guillaume Lajoie
Topics: MLT
131 · 268 · 0 · 18 Nov 2020

RobustBench: a standardized adversarial robustness benchmark
Francesco Croce, Maksym Andriushchenko, Vikash Sehwag, Edoardo Debenedetti, Nicolas Flammarion, M. Chiang, Prateek Mittal, Matthias Hein
Topics: VLM
322 · 703 · 0 · 19 Oct 2020

The Many Faces of Robustness: A Critical Analysis of Out-of-Distribution Generalization
Dan Hendrycks, Steven Basart, Norman Mu, Saurav Kadavath, Frank Wang, ..., Samyak Parajuli, Mike Guo, Basel Alomair, Jacob Steinhardt, Justin Gilmer
Topics: OOD
347 · 1,751 · 0 · 29 Jun 2020

The Pitfalls of Simplicity Bias in Neural Networks
Harshay Shah, Kaustav Tamuly, Aditi Raghunathan, Prateek Jain, Praneeth Netrapalli
Topics: AAML
69 · 361 · 0 · 13 Jun 2020

Shortcut Learning in Deep Neural Networks
Robert Geirhos, J. Jacobsen, Claudio Michaelis, R. Zemel, Wieland Brendel, Matthias Bethge, Felix Wichmann
214 · 2,056 · 0 · 16 Apr 2020

Towards Understanding the Spectral Bias of Deep Learning
Yuan Cao, Zhiying Fang, Yue Wu, Ding-Xuan Zhou, Quanquan Gu
101 · 219 · 0 · 03 Dec 2019

Learning De-biased Representations with Biased Representations
Hyojin Bahng, Sanghyuk Chun, Sangdoo Yun, Jaegul Choo, Seong Joon Oh
Topics: OOD
382 · 281 · 0 · 07 Oct 2019

Learning Robust Global Representations by Penalizing Local Predictive Power
Haohan Wang, Songwei Ge, Eric Xing, Zachary Chase Lipton
Topics: OOD
122 · 962 · 0 · 29 May 2019

Benchmarking Neural Network Robustness to Common Corruptions and Perturbations
Dan Hendrycks, Thomas G. Dietterich
Topics: OOD, VLM
191 · 3,445 · 0 · 28 Mar 2019

Unmasking Clever Hans Predictors and Assessing What Machines Really Learn
Sebastian Lapuschkin, S. Wäldchen, Alexander Binder, G. Montavon, Wojciech Samek, K. Müller
102 · 1,017 · 0 · 26 Feb 2019

Frequency Principle: Fourier Analysis Sheds Light on Deep Neural Networks
Zhi-Qin John Xu, Yaoyu Zhang, Yan Xiao, Zheng Ma
124 · 519 · 0 · 19 Jan 2019

Certified Robustness to Adversarial Examples with Differential Privacy
Mathias Lécuyer, Vaggelis Atlidakis, Roxana Geambasu, Daniel J. Hsu, Suman Jana
Topics: SILM, AAML
96 · 939 · 0 · 09 Feb 2018

Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization
Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, Dhruv Batra
Topics: FAtt
321 · 20,070 · 0 · 07 Oct 2016

Why is Posterior Sampling Better than Optimism for Reinforcement Learning?
Ian Osband, Benjamin Van Roy
Topics: BDL
83 · 261 · 0 · 01 Jul 2016

Deep Residual Learning for Image Recognition
Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
Topics: MedIm
2.2K · 194,322 · 0 · 10 Dec 2015