RISE: Randomized Input Sampling for Explanation of Black-box Models
Vitali Petsiuk, Abir Das, Kate Saenko
arXiv:1806.07421 · 19 June 2018 · [FAtt]
Papers citing "RISE: Randomized Input Sampling for Explanation of Black-box Models" (50 of 652 shown):
- Rethinking interpretation: Input-agnostic saliency mapping of deep visual classifiers. Naveed Akhtar, M. Jalwana. 31 Mar 2023. [FAtt, AAML]
- Model-agnostic explainable artificial intelligence for object detection in image data. M. Moradi, Ke Yan, David Colwell, Matthias Samwald, Rhona Asgari. 30 Mar 2023. [AAML]
- Are Data-driven Explanations Robust against Out-of-distribution Data? Tang Li, Fengchun Qiao, Mengmeng Ma, Xiangkai Peng. 29 Mar 2023. [OODD, OOD]
- UFO: A unified method for controlling Understandability and Faithfulness Objectives in concept-based explanations for CNNs. V. V. Ramaswamy, Sunnie S. Y. Kim, Ruth C. Fong, Olga Russakovsky. 27 Mar 2023.
- Explainable Image Quality Assessment for Medical Imaging. Caner Ozer, Arda Güler, A. Cansever, Ilkay Oksuz. 25 Mar 2023.
- IDGI: A Framework to Eliminate Explanation Noise from Integrated Gradients. Ruo Yang, Binghui Wang, M. Bilgic. 24 Mar 2023.
- Better Understanding Differences in Attribution Methods via Systematic Evaluations. Sukrut Rao, Moritz D Boehle, Bernt Schiele. 21 Mar 2023. [XAI]
- Leaping Into Memories: Space-Time Deep Feature Synthesis. Alexandros Stergiou, Nikos Deligiannis. 17 Mar 2023.
- Explainable GeoAI: Can saliency maps help interpret artificial intelligence's learning process? An empirical study on natural feature detection. Chia-Yu Hsu, Wenwen Li. 16 Mar 2023. [AAML, FAtt]
- Empowering CAM-Based Methods with Capability to Generate Fine-Grained and High-Faithfulness Explanations. Changqing Qiu, Fusheng Jin, Yining Zhang. 16 Mar 2023. [FAtt]
- EvalAttAI: A Holistic Approach to Evaluating Attribution Maps in Robust and Non-Robust Models. Ian E. Nielsen, Ravichandran Ramachandran, N. Bouaynaya, Hassan M. Fathallah-Shaykh, Ghulam Rasool. 15 Mar 2023. [AAML, FAtt]
- Towards Trust of Explainable AI in Thyroid Nodule Diagnosis. Hung Truong Thanh Nguyen, Van Binh Truong, V. Nguyen, Quoc Hung Cao, Quoc Khanh Nguyen. 08 Mar 2023.
- CoRTX: Contrastive Framework for Real-time Explanation. Yu-Neng Chuang, Guanchu Wang, Fan Yang, Quan-Gen Zhou, Pushkar Tripathi, Xuanting Cai, Xia Hu. 05 Mar 2023.
- Feature Perturbation Augmentation for Reliable Evaluation of Importance Estimators in Neural Networks. L. Brocki, N. C. Chung. 02 Mar 2023. [FAtt, AAML]
- Finding the right XAI method -- A Guide for the Evaluation and Ranking of Explainable AI Methods in Climate Science. P. Bommer, M. Kretschmer, Anna Hedström, Dilyara Bareeva, Marina M.-C. Höhne. 01 Mar 2023.
- SUNY: A Visual Interpretation Framework for Convolutional Neural Networks from a Necessary and Sufficient Perspective. Xiwei Xuan, Ziquan Deng, Hsuan-Tien Lin, Z. Kong, Kwan-Liu Ma. 01 Mar 2023. [AAML, FAtt]
- Don't be fooled: label leakage in explanation methods and the importance of their quantitative evaluation. N. Jethani, A. Saporta, Rajesh Ranganath. 24 Feb 2023. [FAtt]
- The Generalizability of Explanations. Hanxiao Tan. 23 Feb 2023. [FAtt]
- On The Coherence of Quantitative Evaluation of Visual Explanations. Benjamin Vandersmissen, José Oramas. 14 Feb 2023. [XAI, FAtt]
- Explaining text classifiers through progressive neighborhood approximation with realistic samples. Yi Cai, Arthur Zimek, Eirini Ntoutsi, Gerhard Wunder. 11 Feb 2023. [AI4TS]
- A novel approach to generate datasets with XAI ground truth to evaluate image models. Miquel Miró-Nicolau, Antoni Jaume-i-Capó, Gabriel Moyà Alcover. 11 Feb 2023.
- PAMI: partition input and aggregate outputs for model interpretation. Wei Shi, Wentao Zhang, Weishi Zheng, Ruixuan Wang. 07 Feb 2023. [FAtt]
- Efficient XAI Techniques: A Taxonomic Survey. Yu-Neng Chuang, Guanchu Wang, Fan Yang, Zirui Liu, Xuanting Cai, Mengnan Du, Xia Hu. 07 Feb 2023.
- Salient Conditional Diffusion for Defending Against Backdoor Attacks. Brandon B. May, N. Joseph Tatro, Dylan Walker, Piyush Kumar, N. Shnidman. 31 Jan 2023. [DiffM]
- Supporting Safety Analysis of Image-processing DNNs through Clustering-based Approaches. M. Attaoui, Hazem M. Fahmy, F. Pastore, Lionel C. Briand. 31 Jan 2023. [AI4CE]
- A Survey of Explainable AI in Deep Visual Modeling: Methods and Metrics. Naveed Akhtar. 31 Jan 2023. [XAI, VLM]
- Distilling Cognitive Backdoor Patterns within an Image. Hanxun Huang, Xingjun Ma, S. Erfani, James Bailey. 26 Jan 2023. [AAML]
- Holistically Explainable Vision Transformers. Moritz D Boehle, Mario Fritz, Bernt Schiele. 20 Jan 2023. [ViT]
- Sanity checks and improvements for patch visualisation in prototype-based image classification. Romain Xu-Darme, Georges Quénot, Zakaria Chihani, M. Rousset. 20 Jan 2023.
- TAME: Attention Mechanism Based Feature Fusion for Generating Explanation Maps of Convolutional Neural Networks. Mariano V. Ntrougkas, Nikolaos Gkalelis, Vasileios Mezaris. 18 Jan 2023. [FAtt]
- Opti-CAM: Optimizing saliency maps for interpretability. Hanwei Zhang, Felipe Torres, R. Sicre, Yannis Avrithis, Stéphane Ayache. 17 Jan 2023.
- Negative Flux Aggregation to Estimate Feature Attributions. X. Li, Deng Pan, Chengyin Li, Yao Qiang, D. Zhu. 17 Jan 2023. [FAtt]
- CORE: Learning Consistent Ordinal REpresentations for Image Ordinal Estimation. Yiming Lei, Zilong Li, Yangyang Li, Junping Zhang, Hongming Shan. 15 Jan 2023.
- Rationalizing Predictions by Adversarial Information Calibration. Lei Sha, Oana-Maria Camburu, Thomas Lukasiewicz. 15 Jan 2023.
- Hierarchical Dynamic Masks for Visual Explanation of Neural Networks. Yitao Peng, Longzhen Yang, Yihang Liu, Lianghua He. 12 Jan 2023. [FAtt]
- Learning Support and Trivial Prototypes for Interpretable Image Classification. Chong Wang, Yuyuan Liu, Yuanhong Chen, Fengbei Liu, Yu Tian, Davis J. McCarthy, Helen Frazer, G. Carneiro. 08 Jan 2023.
- Explaining Imitation Learning through Frames. Boyuan Zheng, Jianlong Zhou, Chun-Hao Liu, Yiqiao Li, Fang Chen. 03 Jan 2023.
- Disentangled Explanations of Neural Network Predictions by Finding Relevant Subspaces. Pattarawat Chormai, J. Herrmann, Klaus-Robert Muller, G. Montavon. 30 Dec 2022. [FAtt]
- Explainable AI for Bioinformatics: Methods, Tools, and Applications. Md. Rezaul Karim, Tanhim Islam, Oya Beyan, Christoph Lange, Michael Cochez, Dietrich-Rebholz Schuhmann, Stefan Decker. 25 Dec 2022.
- Security and Interpretability in Automotive Systems. Shailja Thakur. 23 Dec 2022. [AAML]
- Impossibility Theorems for Feature Attribution. Blair Bilodeau, Natasha Jaques, Pang Wei Koh, Been Kim. 22 Dec 2022. [FAtt]
- DExT: Detector Explanation Toolkit. Deepan Padmanabhan, Paul G. Plöger, Octavio Arriaga, Matias Valdenegro-Toro. 21 Dec 2022.
- Bort: Towards Explainable Neural Networks with Bounded Orthogonal Constraint. Borui Zhang, Wenzhao Zheng, Jie Zhou, Jiwen Lu. 18 Dec 2022. [AAML]
- MM-SHAP: A Performance-agnostic Metric for Measuring Multimodal Contributions in Vision and Language Models & Tasks. Letitia Parcalabescu, Anette Frank. 15 Dec 2022.
- Comparing the Decision-Making Mechanisms by Transformers and CNNs via Explanation Methods. Ming-Xiu Jiang, Saeed Khorram, Li Fuxin. 13 Dec 2022. [FAtt]
- Ensuring Visual Commonsense Morality for Text-to-Image Generation. Seong-Oak Park, Suhong Moon, Jinkyu Kim. 07 Dec 2022.
- Evaluation of Explanation Methods of AI -- CNNs in Image Classification Tasks with Reference-based and No-reference Metrics. A. Zhukov, J. Benois-Pineau, R. Giot. 02 Dec 2022.
- Towards More Robust Interpretation via Local Gradient Alignment. Sunghwan Joo, Seokhyeon Jeong, Juyeon Heo, Adrian Weller, Taesup Moon. 29 Nov 2022. [FAtt]
- Interactive Visual Feature Search. Devon Ulrich, Ruth C. Fong. 28 Nov 2022. [FAtt]
- Attribution-based XAI Methods in Computer Vision: A Review. Kumar Abhishek, Deeksha Kamath. 27 Nov 2022.