Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps
arXiv:1312.6034 · 20 December 2013
Karen Simonyan, Andrea Vedaldi, Andrew Zisserman · FAtt

Papers citing "Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps"

50 / 83 papers shown
Soft-CAM: Making black box models self-explainable for high-stakes decisions
K. Djoumessi, Philipp Berens · FAtt, BDL · 23 May 2025

Proto-FG3D: Prototype-based Interpretable Fine-Grained 3D Shape Classification
Shuxian Ma, Zihao Dong, Runmin Cong, Sam Kwong, Xiuli Shao · 23 May 2025

Minimizing False-Positive Attributions in Explanations of Non-Linear Models
Anders Gjølbye, Stefan Haufe, Lars Kai Hansen · 16 May 2025

Probabilistic Stability Guarantees for Feature Attributions
Helen Jin, Anton Xue, Weiqiu You, Surbhi Goel, Eric Wong · 18 Apr 2025

Interactivity x Explainability: Toward Understanding How Interactivity Can Improve Computer Vision Explanations
Indu Panigrahi, Sunnie S. Y. Kim, Amna Liaqat, Rohan Jinturkar, Olga Russakovsky, Ruth C. Fong, Parastoo Abtahi · FAtt, HAI · 14 Apr 2025

Towards Combinatorial Interpretability of Neural Computation
Micah Adler, Dan Alistarh, Nir Shavit · FAtt · 10 Apr 2025

Explainable AI-Based Interface System for Weather Forecasting Model
Soyeon Kim, Junho Choi, Yeji Choi, Subeen Lee, Artyom Stitsyuk, Minkyoung Park, Seongyeop Jeong, Youhyun Baek, Jaesik Choi · XAI · 01 Apr 2025

Investigating the Duality of Interpretability and Explainability in Machine Learning
Moncef Garouani, Josiane Mothe, Ayah Barhrhouj, Julien Aligon · AAML · 27 Mar 2025

Axiomatic Explainer Globalness via Optimal Transport
Davin Hill, Josh Bone, A. Masoomi, Max Torop, Jennifer Dy · 13 Mar 2025

Revealing Unintentional Information Leakage in Low-Dimensional Facial Portrait Representations
Kathleen Anderson, Thomas Martinetz · CVBM · 12 Mar 2025

Generalizable and Explainable Deep Learning for Medical Image Computing: An Overview
Ahmad Chaddad, Yan Hu, Yihang Wu, Binbin Wen, R. Kateb · 11 Mar 2025

FW-Shapley: Real-time Estimation of Weighted Shapley Values
Pranoy Panda, Siddharth Tandon, V. Balasubramanian · TDI · 09 Mar 2025

Interpretable Visualizations of Data Spaces for Classification Problems
Christian Jorgensen, Arthur Y. Lin, Rhushil Vasavada, Rose K. Cersonsky · 07 Mar 2025

Interpreting CLIP with Hierarchical Sparse Autoencoders
Vladimir Zaigrajew, Hubert Baniecki, P. Biecek · 27 Feb 2025

Show and Tell: Visually Explainable Deep Neural Nets via Spatially-Aware Concept Bottleneck Models
Itay Benou, Tammy Riklin-Raviv · 27 Feb 2025

Selective Prompt Anchoring for Code Generation
Yuan Tian, Tianyi Zhang · 24 Feb 2025

Disentangling Visual Transformers: Patch-level Interpretability for Image Classification
Guillaume Jeanneret, Loïc Simon, F. Jurie · ViT · 24 Feb 2025

Class-Dependent Perturbation Effects in Evaluating Time Series Attributions
Gregor Baer, Isel Grau, Chao Zhang, Pieter Van Gorp · AAML · 24 Feb 2025

Explainable Neural Networks with Guarantees: A Sparse Estimation Approach
Antoine Ledent, Peng Liu · FAtt · 20 Feb 2025

Uncertainty-Aware Explanations Through Probabilistic Self-Explainable Neural Networks
Jon Vadillo, Roberto Santana, J. A. Lozano, Marta Z. Kwiatkowska · BDL, AAML · 17 Feb 2025

Error-controlled non-additive interaction discovery in machine learning models
Winston Chen, Yifan Jiang, William Stafford Noble, Yang Young Lu · 17 Feb 2025

Explaining 3D Computed Tomography Classifiers with Counterfactuals
Joseph Paul Cohen, Louis Blankemeier, Akshay S. Chaudhari · MedIm · 11 Feb 2025

B-cosification: Transforming Deep Neural Networks to be Inherently Interpretable
Shreyash Arya, Sukrut Rao, Moritz Bohle, Bernt Schiele · 28 Jan 2025

Path Analysis for Effective Fault Localization in Deep Neural Networks
Soroush Hashemifar, Saeed Parsa, A. Kalaee · AAML · 28 Jan 2025

Efficient and Interpretable Neural Networks Using Complex Lehmer Transform
M. Ataei, Xiaogang Wang · 28 Jan 2025

Generating visual explanations from deep networks using implicit neural representations
Michal Byra, Henrik Skibbe · GAN, FAtt · 20 Jan 2025

Explaining the Behavior of Black-Box Prediction Algorithms with Causal Learning
Numair Sani, Daniel Malinsky, I. Shpitser · CML · 10 Jan 2025

Attention Mechanisms Don't Learn Additive Models: Rethinking Feature Importance for Transformers
Tobias Leemann, Alina Fastowski, Felix Pfeiffer, Gjergji Kasneci · 10 Jan 2025

Navigating the Maze of Explainable AI: A Systematic Approach to Evaluating Methods and Metrics
Lukas Klein, Carsten T. Lüth, U. Schlegel, Till J. Bungert, Mennatallah El-Assady, Paul F. Jäger · XAI, ELM · 03 Jan 2025

FitCF: A Framework for Automatic Feature Importance-guided Counterfactual Example Generation
Qianli Wang, Nils Feldhus, Simon Ostermann, Luis Felipe Villa-Arenas, Sebastian Möller, Vera Schmitt · AAML · 01 Jan 2025

Accurate Explanation Model for Image Classifiers using Class Association Embedding
Ruitao Xie, Jingbang Chen, Limai Jiang, Rui Xiao, Yi-Lun Pan, Yunpeng Cai · 31 Dec 2024

Explaining the Impact of Training on Vision Models via Activation Clustering
Ahcène Boubekki, Samuel G. Fadel, Sebastian Mair · 29 Nov 2024

Interplay between Federated Learning and Explainable Artificial Intelligence: a Scoping Review
Luis M. Lopez-Ramos, Florian Leiser, Aditya Rastogi, Steven Hicks, Inga Strümke, V. Madai, Tobias Budig, Ali Sunyaev, A. Hilbert · 07 Nov 2024

GraphXAIN: Narratives to Explain Graph Neural Networks
Mateusz Cedro, David Martens · 04 Nov 2024

SPES: Spectrogram Perturbation for Explainable Speech-to-Text Generation
Dennis Fucci, Marco Gaido, Beatrice Savoldi, Matteo Negri, Mauro Cettolo, L. Bentivogli · 03 Nov 2024

Unlearning-based Neural Interpretations
Ching Lam Choi, Alexandre Duplessis, Serge Belongie · FAtt · 10 Oct 2024

Contrastive Explanations That Anticipate Human Misconceptions Can Improve Human Decision-Making Skills
Zana Buçinca, S. Swaroop, Amanda E. Paluch, Finale Doshi-Velez, Krzysztof Z. Gajos · 05 Oct 2024

Inferring Thunderstorm Occurrence from Vertical Profiles of Convection-Permitting Simulations: Physical Insights from a Physical Deep Learning Model
Kianusch Vahid Yousefnia, Tobias Bölle, Christoph Metzl · 30 Sep 2024

Explanation Space: A New Perspective into Time Series Interpretability
Shahbaz Rezaei, Xin Liu · AI4TS · 02 Sep 2024

Explainable Artificial Intelligence: A Survey of Needs, Techniques, Applications, and Future Direction
Melkamu Mersha, Khang Lam, Joseph Wood, Ali AlShami, Jugal Kalita · XAI, AI4TS · 30 Aug 2024

Smooth InfoMax -- Towards easier Post-Hoc interpretability
Fabian Denoodt, Bart de Boer, José Oramas · 23 Aug 2024

Counterfactuals As a Means for Evaluating Faithfulness of Attribution Methods in Autoregressive Language Models
Sepehr Kamahi, Yadollah Yaghoobzadeh · 21 Aug 2024

Automatic rating of incomplete hippocampal inversions evaluated across multiple cohorts
Lisa Hemforth, B. Couvy-Duchesne, Kevin de Matos, Camille Brianceau, Matthieu Joulot, ..., V. Frouin, Alexandre Martin, IMAGEN study group, C. Cury, O. Colliot · 05 Aug 2024

Interpreting artificial neural networks to detect genome-wide association signals for complex traits
Burak Yelmen, Maris Alver, Estonian Biobank Research Team, Flora Jay, Lili Milani · 26 Jul 2024

I2AM: Interpreting Image-to-Image Latent Diffusion Models via Bi-Attribution Maps
Junseo Park, Hyeryung Jang · 17 Jul 2024

Towards Understanding Multi-Task Learning (Generalization) of LLMs via Detecting and Exploring Task-Specific Neurons
Yongqi Leng, Deyi Xiong · 09 Jul 2024

Restyling Unsupervised Concept Based Interpretable Networks with Generative Models
Jayneel Parekh, Quentin Bouniot, Pavlo Mozharovskyi, A. Newson, Florence d'Alché-Buc · SSL · 01 Jul 2024

Revealing the Learning Process in Reinforcement Learning Agents Through Attention-Oriented Metrics
Charlotte Beylier, Simon M. Hofmann, Nico Scherf · 20 Jun 2024

CELL your Model: Contrastive Explanations for Large Language Models
Ronny Luss, Erik Miehling, Amit Dhurandhar · 17 Jun 2024

Listenable Maps for Zero-Shot Audio Classifiers
Francesco Paissan, Luca Della Libera, Mirco Ravanelli, Cem Subakan · 27 May 2024