Explaining the Impact of Training on Vision Models via Activation Clustering
Ahcène Boubekki, Samuel G. Fadel, Sebastian Mair
29 November 2024 · versions: v1, v2, v3 (latest)
arXiv (abs) · PDF · HTML

Papers citing "Explaining the Impact of Training on Vision Models via Activation Clustering"

43 / 43 papers shown. Each entry gives the title, authors, topic tags (where assigned), the publication date, and the three per-paper counts displayed on the listing.

BackMix: Mitigating Shortcut Learning in Echocardiography with Minimal Supervision
Kit M. Bransby, A. Beqiri, Woo-Jin Cho Kim, Jorge Oliveira, A. Chartsias, Alberto Gomez
27 Jun 2024 · 75 · 1 · 0

Prototypical Self-Explainable Models Without Re-training
Srishti Gautam, Ahcène Boubekki, Marina M.-C. Höhne, Michael C. Kampffmeyer
13 Dec 2023 · 97 · 2 · 0

Vision Transformers Need Registers
Timothée Darcet, Maxime Oquab, Julien Mairal, Piotr Bojanowski
ViT · 28 Sep 2023 · 201 · 356 · 0

Weakly-supervised Contrastive Learning for Unsupervised Object Discovery
Yun-Qiu Lv, Jing Zhang, Nick Barnes, Yuchao Dai
07 Jul 2023 · 79 · 11 · 0

XAI-TRIS: Non-linear image benchmarks to quantify false positive post-hoc attribution of feature importance
Benedict Clark, Rick Wilming, Stefan Haufe
22 Jun 2023 · 100 · 5 · 0

A Holistic Approach to Unifying Automatic Concept Extraction and Concept Importance Estimation
Thomas Fel, Victor Boutin, Mazda Moayeri, Rémi Cadène, Louis Bethune, Léo Andéol, Mathieu Chalvidal, Thomas Serre
FAtt · 11 Jun 2023 · 94 · 64 · 0

Explain Any Concept: Segment Anything Meets Concept-Based Explanation
Ao Sun, Pingchuan Ma, Yuanyuan Yuan, Shuai Wang
LLMAG · 17 May 2023 · 112 · 36 · 0

DINOv2: Learning Robust Visual Features without Supervision
Maxime Oquab, Timothée Darcet, Théo Moutakanni, Huy V. Vo, Marc Szafraniec, ..., Hervé Jégou, Julien Mairal, Patrick Labatut, Armand Joulin, Piotr Bojanowski
VLM, CLIP, SSL · 14 Apr 2023 · 448 · 3,529 · 0

Preemptively Pruning Clever-Hans Strategies in Deep Neural Networks
Lorenz Linhardt, Klaus-Robert Müller, G. Montavon
AAML · 12 Apr 2023 · 109 · 8 · 0

Disentangled Explanations of Neural Network Predictions by Finding Relevant Subspaces
Pattarawat Chormai, J. Herrmann, Klaus-Robert Müller, G. Montavon
FAtt · 30 Dec 2022 · 120 · 20 · 0

CRAFT: Concept Recursive Activation FacTorization for Explainability
Thomas Fel, Agustin Picard, Louis Bethune, Thibaut Boissin, David Vigouroux, Julien Colin, Rémi Cadène, Thomas Serre
17 Nov 2022 · 103 · 116 · 0

Demonstrating The Risk of Imbalanced Datasets in Chest X-ray Image-based Diagnostics by Prototypical Relevance Propagation
Srishti Gautam, Marina M.-C. Höhne, Stine Hansen, Robert Jenssen, Michael C. Kampffmeyer
10 Jan 2022 · 59 · 7 · 0

RELAX: Representation Learning Explainability
Kristoffer Wickstrøm, Daniel J. Trosten, Sigurd Løkse, Ahcène Boubekki, Karl Øyvind Mikalsen, Michael C. Kampffmeyer, Robert Jenssen
FAtt · 19 Dec 2021 · 49 · 15 · 0

Localizing Objects with Self-Supervised Transformers and no Labels
Oriane Siméoni, Gilles Puy, Huy V. Vo, Simon Roburin, Spyros Gidaris, Andrei Bursuc, P. Pérez, Renaud Marlet, Jean Ponce
ViT · 29 Sep 2021 · 260 · 203 · 0

Instance-wise or Class-wise? A Tale of Neighbor Shapley for Concept-based Explanation
Jiahui Li, Kun Kuang, Lin Li, Long Chen, Songyang Zhang, Jian Shao, Jun Xiao
FAtt · 03 Sep 2021 · 106 · 19 · 0

Emerging Properties in Self-Supervised Vision Transformers
Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, Armand Joulin
29 Apr 2021 · 812 · 6,149 · 0

Transformer Interpretability Beyond Attention Visualization
Hila Chefer, Shir Gur, Lior Wolf
17 Dec 2020 · 145 · 676 · 0

An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, ..., Matthias Minderer, G. Heigold, Sylvain Gelly, Jakob Uszkoreit, N. Houlsby
ViT · 22 Oct 2020 · 736 · 41,796 · 0

Explainable Face Recognition
Jonathan R. Williford, Brandon B. May, J. Byrne
CVBM · 03 Aug 2020 · 69 · 71 · 0

Shortcut Learning in Deep Neural Networks
Robert Geirhos, J. Jacobsen, Claudio Michaelis, R. Zemel, Wieland Brendel, Matthias Bethge, Felix Wichmann
16 Apr 2020 · 229 · 2,069 · 0

A Simple Framework for Contrastive Learning of Visual Representations
Ting Chen, Simon Kornblith, Mohammad Norouzi, Geoffrey E. Hinton
SSL · 13 Feb 2020 · 422 · 18,968 · 0

PyTorch: An Imperative Style, High-Performance Deep Learning Library
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, ..., Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, Soumith Chintala
ODL · 03 Dec 2019 · 632 · 42,770 · 0

On Completeness-aware Concept-Based Explanations in Deep Neural Networks
Chih-Kuan Yeh, Been Kim, Sercan O. Arik, Chun-Liang Li, Tomas Pfister, Pradeep Ravikumar
FAtt · 17 Oct 2019 · 316 · 307 · 0

Explanations can be manipulated and geometry is to blame
Ann-Kathrin Dombrowski, Maximilian Alber, Christopher J. Anders, M. Ackermann, K. Müller, Pan Kessel
AAML, FAtt · 19 Jun 2019 · 88 · 335 · 0

From Clustering to Cluster Explanations via Neural Networks
Jacob R. Kauffmann, Malte Esders, Lukas Ruff, G. Montavon, Wojciech Samek, K. Müller
18 Jun 2019 · 79 · 72 · 0

FCOS: Fully Convolutional One-Stage Object Detection
Zhi Tian, Chunhua Shen, Hao Chen, Tong He
ObjD · 02 Apr 2019 · 198 · 5,031 · 0

Unmasking Clever Hans Predictors and Assessing What Machines Really Learn
Sebastian Lapuschkin, S. Wäldchen, Alexander Binder, G. Montavon, Wojciech Samek, K. Müller
26 Feb 2019 · 106 · 1,022 · 0

Confounding variables can degrade generalization performance of radiological deep learning models
J. Zech, Marcus A. Badgeley, Manway Liu, A. Costa, J. Titano, Eric K. Oermann
OOD · 02 Jul 2018 · 91 · 1,185 · 0

This Looks Like That: Deep Learning for Interpretable Image Recognition
Chaofan Chen, Oscar Li, Chaofan Tao, A. Barnett, Jonathan Su, Cynthia Rudin
27 Jun 2018 · 304 · 1,191 · 0

Towards Robust Interpretability with Self-Explaining Neural Networks
David Alvarez-Melis, Tommi Jaakkola
MILM, XAI · 20 Jun 2018 · 138 · 948 · 0

RISE: Randomized Input Sampling for Explanation of Black-box Models
Vitali Petsiuk, Abir Das, Kate Saenko
FAtt · 19 Jun 2018 · 224 · 1,177 · 0

Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)
Been Kim, Martin Wattenberg, Justin Gilmer, Carrie J. Cai, James Wexler, F. Viégas, Rory Sayres
FAtt · 30 Nov 2017 · 272 · 1,849 · 0

Interpreting Deep Visual Representations via Network Dissection
Bolei Zhou, David Bau, A. Oliva, Antonio Torralba
FAtt, MILM · 15 Nov 2017 · 74 · 325 · 0

Axiomatic Attribution for Deep Networks
Mukund Sundararajan, Ankur Taly, Qiqi Yan
OOD, FAtt · 04 Mar 2017 · 213 · 6,044 · 0

Feature Pyramid Networks for Object Detection
Tsung-Yi Lin, Piotr Dollár, Ross B. Girshick, Kaiming He, Bharath Hariharan, Serge J. Belongie
ObjD · 09 Dec 2016 · 528 · 22,242 · 0

Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization
Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, Dhruv Batra
FAtt · 07 Oct 2016 · 508 · 20,227 · 0

"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
FAtt, FaML · 16 Feb 2016 · 1.3K · 17,197 · 0

Deep Residual Learning for Image Recognition
Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
MedIm · 10 Dec 2015 · 2.4K · 195,011 · 0

Striving for Simplicity: The All Convolutional Net
Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, Martin Riedmiller
FAtt · 21 Dec 2014 · 276 · 4,682 · 0

ImageNet Large Scale Visual Recognition Challenge
Olga Russakovsky, Jia Deng, Hao Su, J. Krause, S. Satheesh, ..., A. Karpathy, A. Khosla, Michael S. Bernstein, Alexander C. Berg, Li Fei-Fei
VLM, ObjD · 01 Sep 2014 · 1.7K · 39,705 · 0

Microsoft COCO: Common Objects in Context
Tsung-Yi Lin, Michael Maire, Serge J. Belongie, Lubomir Bourdev, Ross B. Girshick, James Hays, Pietro Perona, Deva Ramanan, C. L. Zitnick, Piotr Dollár
ObjD · 01 May 2014 · 473 · 43,961 · 0

Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps
Karen Simonyan, Andrea Vedaldi, Andrew Zisserman
FAtt · 20 Dec 2013 · 339 · 7,333 · 0

Visualizing and Understanding Convolutional Networks
Matthew D. Zeiler, Rob Fergus
FAtt, SSL · 12 Nov 2013 · 633 · 15,922 · 0