arXiv 1904.00760 · Cited By
Approximating CNNs with Bag-of-local-Features models works surprisingly well on ImageNet
Wieland Brendel, Matthias Bethge · SSL, FAtt · 20 March 2019
Papers citing "Approximating CNNs with Bag-of-local-Features models works surprisingly well on ImageNet" (50 of 312 papers shown)
InfoCons: Identifying Interpretable Critical Concepts in Point Clouds via Information Theory
Feifei Li, Mi Zhang, Zhaoxiang Wang, Min Yang · 26 May 2025
Soft-CAM: Making black box models self-explainable for high-stakes decisions
K. Djoumessi, Philipp Berens · FAtt, BDL · 23 May 2025
Interactivity x Explainability: Toward Understanding How Interactivity Can Improve Computer Vision Explanations
Indu Panigrahi, Sunnie S. Y. Kim, Amna Liaqat, Rohan Jinturkar, Olga Russakovsky, Ruth C. Fong, Parastoo Abtahi · FAtt, HAI · 14 Apr 2025
A Hybrid Fully Convolutional CNN-Transformer Model for Inherently Interpretable Medical Image Classification
K. Djoumessi, Samuel Ofosu Mensah, Philipp Berens · ViT, MedIm · 11 Apr 2025
v-CLR: View-Consistent Learning for Open-World Instance Segmentation
Chang-Bin Zhang, Jinhong Ni, Yujie Zhong, Kai Han · 3DV, VLM · 02 Apr 2025
Beyond Accuracy: What Matters in Designing Well-Behaved Models?
Robin Hesse, Doğukan Bağcı, Bernt Schiele, Simone Schaub-Meyer, Stefan Roth · VLM · 21 Mar 2025
Birds look like cars: Adversarial analysis of intrinsically interpretable deep learning
Hubert Baniecki, P. Biecek · AAML · 11 Mar 2025
i-WiViG: Interpretable Window Vision GNN
Ivica Obadic, D. Kangin, Dario Augusto Borges Oliveira, Plamen Angelov, Xiao Xiang Zhu · 11 Mar 2025
Disentangling Visual Transformers: Patch-level Interpretability for Image Classification
Guillaume Jeanneret, Loïc Simon, F. Jurie · ViT · 24 Feb 2025
Unlearning-based Neural Interpretations
Ching Lam Choi, Alexandre Duplessis, Serge Belongie · FAtt · 10 Oct 2024
InfoDisent: Explainability of Image Classification Models by Information Disentanglement
Łukasz Struski, Dawid Rymarczyk, Jacek Tabor · 16 Sep 2024
Say My Name: a Model's Bias Discovery Framework
Massimiliano Ciranni, Luca Molinaro, C. Barbano, Attilio Fiandrotti, Vittorio Murino, Vito Paolo Pastore, Enzo Tartaglione · 18 Aug 2024
Benchmarking the Attribution Quality of Vision Models
Robin Hesse, Simone Schaub-Meyer, Stefan Roth · FAtt · 16 Jul 2024
Knowledge distillation to effectively attain both region-of-interest and global semantics from an image where multiple objects appear
Seonwhee Jin · 11 Jul 2024
This actually looks like that: Proto-BagNets for local and global interpretability-by-design
K. Djoumessi, B. Bah, Laura Kühlewein, Philipp Berens, Lisa M. Koch · FAtt · 21 Jun 2024
Real-Time Deepfake Detection in the Real-World
Bar Cavia, Eliahu Horwitz, Tal Reiss, Yedid Hoshen · 13 Jun 2024
Attri-Net: A Globally and Locally Inherently Interpretable Model for Multi-Label Classification Using Class-Specific Counterfactuals
Susu Sun, S. Woerner, Andreas Maier, Lisa M. Koch, Christian F. Baumgartner · FAtt · 08 Jun 2024
How Video Meetings Change Your Expression
Sumit Sarin, Utkarsh Mall, Purva Tendulkar, Carl Vondrick · CVBM · 03 Jun 2024
Improving Accuracy-robustness Trade-off via Pixel Reweighted Adversarial Training
Jiacheng Zhang, Feng Liu, Dawei Zhou, Jingfeng Zhang, Tongliang Liu · AAML · 02 Jun 2024
Sharpness-Aware Minimization Enhances Feature Quality via Balanced Learning
Jacob Mitchell Springer, Vaishnavh Nagarajan, Aditi Raghunathan · 30 May 2024
LucidPPN: Unambiguous Prototypical Parts Network for User-centric Interpretable Computer Vision
Mateusz Pach, Dawid Rymarczyk, K. Lewandowska, Jacek Tabor, Bartosz Zieliński · 23 May 2024
CICA: Content-Injected Contrastive Alignment for Zero-Shot Document Image Classification
Sankalp Sinha, Muhammad Gul Zain Ali Khan, Talha Uddin Sheikh, Didier Stricker, Muhammad Zeshan Afzal · VLM · 06 May 2024
ExMap: Leveraging Explainability Heatmaps for Unsupervised Group Robustness to Spurious Correlations
Rwiddhi Chakraborty, Adrian Sletten, Michael C. Kampffmeyer · 20 Mar 2024
Towards White Box Deep Learning
Maciej Satkiewicz · AAML · 14 Mar 2024
What Sketch Explainability Really Means for Downstream Tasks
Hmrishav Bandyopadhyay, Pinaki Nath Chowdhury, A. Bhunia, Aneeshan Sain, Tao Xiang, Yi-Zhe Song · 14 Mar 2024
Trapped in texture bias? A large scale comparison of deep instance segmentation
J. Theodoridis, Jessica Hofmann, J. Maucher, A. Schilling · SSeg · 17 Jan 2024
Seeing the roads through the trees: A benchmark for modeling spatial dependencies with aerial imagery
Caleb Robinson, Isaac Corley, Anthony Ortiz, Rahul Dodhia, J. L. Ferres, Peyman Najafirad · 12 Jan 2024
TPatch: A Triggered Physical Adversarial Patch
Wenjun Zhu, Xiaoyu Ji, Yushi Cheng, Shibo Zhang, Wei Dong · AAML · 30 Dec 2023
Foveation in the Era of Deep Learning
George Killick, Paul Henderson, Paul Siebert, Gerardo Aragon Camarasa · FedML · 03 Dec 2023
Few-shot Shape Recognition by Learning Deep Shape-aware Features
Wenlong Shi, Changsheng Lu, Ming Shao, Yinjie Zhang, Si-Yu Xia, Piotr Koniusz · 03 Dec 2023
Trustworthy Large Models in Vision: A Survey
Ziyan Guo, Li Xu, Jun Liu · MU · 16 Nov 2023
Harnessing Synthetic Datasets: The Role of Shape Bias in Deep Neural Network Generalization
Elior Benarous, Sotiris Anagnostidis, Luca Biggio, Thomas Hofmann · 10 Nov 2023
Emergence of Shape Bias in Convolutional Neural Networks through Activation Sparsity
Tianqin Li, Ziqi Wen, Yangfan Li, Tai Sing Lee · 29 Oct 2023
A General Framework for Robust G-Invariance in G-Equivariant Networks
Sophia Sanborn, Nina Miolane · AAML, OOD · 28 Oct 2023
Detection Defenses: An Empty Promise against Adversarial Patch Attacks on Optical Flow
Erik Scheurer, Jenny Schmalfuss, Alexander Lis, Andrés Bruhn · AAML · 26 Oct 2023
This Reads Like That: Deep Learning for Interpretable Natural Language Processing
Claudio Fanconi, Moritz Vandenhirtz, Severin Husmann, Julia E. Vogt · FAtt · 25 Oct 2023
PatchCURE: Improving Certifiable Robustness, Model Utility, and Computation Efficiency of Adversarial Patch Defenses
Chong Xiang, Tong Wu, Sihui Dai, Jonathan Petit, Suman Jana, Prateek Mittal · 19 Oct 2023
SecurityNet: Assessing Machine Learning Vulnerabilities on Public Models
Boyang Zhang, Zheng Li, Ziqing Yang, Xinlei He, Michael Backes, Mario Fritz, Yang Zhang · 19 Oct 2023
Latent Diffusion Counterfactual Explanations
Karim Farid, Simon Schrodi, Max Argus, Thomas Brox · DiffM · 10 Oct 2023
DeViL: Decoding Vision features into Language
Meghal Dani, Isabel Rio-Torto, Stephan Alaniz, Zeynep Akata · VLM · 04 Sep 2023
FACET: Fairness in Computer Vision Evaluation Benchmark
Laura Gustafson, Chloe Rolland, Nikhila Ravi, Quentin Duval, Aaron B. Adcock, Cheng-Yang Fu, Melissa Hall, Candace Ross · VLM, EGVM · 31 Aug 2023
Video BagNet: short temporal receptive fields increase robustness in long-term action recognition
Ombretta Strafforello, X. Liu, Klamer Schutte, Jan van Gemert · 22 Aug 2023
Foundation Model-oriented Robustness: Robust Image Model Evaluation with Pretrained Models
Peiyan Zhang, Hao Liu, Chaozhuo Li, Xing Xie, Sunghun Kim, Haohan Wang · VLM, OOD · 21 Aug 2023
ASPIRE: Language-Guided Data Augmentation for Improving Robustness Against Spurious Correlations
Sreyan Ghosh, Chandra Kiran Reddy Evuru, Sonal Kumar, Utkarsh Tyagi, Sakshi Singh, Sanjoy Chowdhury, Dinesh Manocha · OOD · 19 Aug 2023
Interpretability Benchmark for Evaluating Spatial Misalignment of Prototypical Parts Explanations
Mikolaj Sacha, Bartosz Jura, Dawid Rymarczyk, Łukasz Struski, Jacek Tabor, Bartosz Zieliński · 16 Aug 2023
FunnyBirds: A Synthetic Vision Dataset for a Part-Based Analysis of Explainable AI Methods
Robin Hesse, Simone Schaub-Meyer, Stefan Roth · AAML · 11 Aug 2023
A Majority Invariant Approach to Patch Robustness Certification for Deep Learning Models
Qili Zhou, Zhengyuan Wei, Haipeng Wang, William Chan · AAML · 01 Aug 2023
The Co-12 Recipe for Evaluating Interpretable Part-Prototype Image Classifiers
Meike Nauta, Christin Seifert · 26 Jul 2023
Right for the Wrong Reason: Can Interpretable ML Techniques Detect Spurious Correlations?
Susu Sun, Lisa M. Koch, Christian F. Baumgartner · 23 Jul 2023
Complementary Frequency-Varying Awareness Network for Open-Set Fine-Grained Image Recognition
Qiulei Dong, Hong Wang · 14 Jul 2023