RISE: Randomized Input Sampling for Explanation of Black-box Models (arXiv:1806.07421)
Vitali Petsiuk, Abir Das, Kate Saenko (19 June 2018) [FAtt]

Papers citing "RISE: Randomized Input Sampling for Explanation of Black-box Models" (50 of 652 shown)

Productivity, Portability, Performance: Data-Centric Python
Yiheng Wang, Yao Zhang, Yanzhang Wang, Yan Wan, Jiao Wang, Zhongyuan Wu, Yuhao Yang, Bowen She (01 Jul 2021)

Explainable Diabetic Retinopathy Detection and Retinal Image Generation
Yuhao Niu, Lin Gu, Yitian Zhao, Feng Lu (01 Jul 2021) [MedIm]

Inverting and Understanding Object Detectors
Ang Cao, Justin Johnson (26 Jun 2021) [ObjD]

Software for Dataset-wide XAI: From Local Explanations to Global Insights with Zennit, CoRelAy, and ViRelAy
Christopher J. Anders, David Neumann, Wojciech Samek, K. Müller, Sebastian Lapuschkin (24 Jun 2021)

CAMERAS: Enhanced Resolution And Sanity preserving Class Activation Mapping for image saliency
M. Jalwana, Naveed Akhtar, Bennamoun, Ajmal Mian (20 Jun 2021)

Guided Integrated Gradients: An Adaptive Path Method for Removing Noise
A. Kapishnikov, Subhashini Venugopalan, Besim Avci, Benjamin D. Wedin, Michael Terry, Tolga Bolukbasi (17 Jun 2021)

An Imprecise SHAP as a Tool for Explaining the Class Probability Distributions under Limited Training Data
Lev V. Utkin, A. Konstantinov, Kirill Vishniakov (16 Jun 2021) [FAtt]

Keep CALM and Improve Visual Feature Attribution
Jae Myung Kim, Junsuk Choe, Zeynep Akata, Seong Joon Oh (15 Jun 2021) [FAtt]

Energy-Based Learning for Cooperative Games, with Applications to Valuation Problems in Machine Learning
Yatao Bian, Yu Rong, Tingyang Xu, Jiaxiang Wu, Andreas Krause, Junzhou Huang (05 Jun 2021)

ZeroWaste Dataset: Towards Deformable Object Segmentation in Cluttered Scenes
D. Bashkirova, M. Abdelfattah, Ziliang Zhu, James Akl, Fadi M. Alladkani, Ping Hu, Vitaly Ablavsky, B. Çalli, Sarah Adel Bargal, Kate Saenko (04 Jun 2021)

BR-NPA: A Non-Parametric High-Resolution Attention Model to improve the Interpretability of Attention
T. Gomez, Suiyi Ling, Thomas Fréour, Harold Mouchère (04 Jun 2021)

The effectiveness of feature attribution methods and its correlation with automatic evaluation scores
Giang Nguyen, Daeyoung Kim, Anh Totti Nguyen (31 May 2021) [FAtt]

EDDA: Explanation-driven Data Augmentation to Improve Explanation Faithfulness
Ruiwen Li, Zhibo Zhang, Jiani Li, C. Trabelsi, Scott Sanner, Jongseong Jang, Yeonjeong Jeong, Dongsub Shim (29 May 2021) [AAML]

A Review on Explainability in Multimodal Deep Neural Nets
Gargi Joshi, Rahee Walambe, K. Kotecha (17 May 2021)

Abstraction, Validation, and Generalization for Explainable Artificial Intelligence
Scott Cheng-Hsin Yang, Tomas Folke, Patrick Shafto (16 May 2021)

A Comprehensive Taxonomy for Explainable Artificial Intelligence: A Systematic Survey of Surveys on Methods and Concepts
Gesina Schwalbe, Bettina Finzel (15 May 2021) [XAI]

What's wrong with this video? Comparing Explainers for Deepfake Detection
Samuele Pino, Mark J. Carman, Paolo Bestagini (12 May 2021) [AAML]

LFI-CAM: Learning Feature Importance for Better Visual Explanation
Kwang Hee Lee, Chaewon Park, J. Oh, Nojun Kwak (03 May 2021) [FAtt]

Patch Shortcuts: Interpretable Proxy Models Efficiently Find Black-Box Vulnerabilities
Julia Rosenzweig, Joachim Sicking, Sebastian Houben, Michael Mock, Maram Akila (22 Apr 2021) [AAML]

Revisiting The Evaluation of Class Activation Mapping for Explainability: A Novel Metric and Experimental Analysis
Samuele Poppi, Marcella Cornia, Lorenzo Baraldi, Rita Cucchiara (20 Apr 2021) [FAtt]

Improving Attribution Methods by Learning Submodular Functions
Piyushi Manupriya, Tarun Ram Menta, S. Jagarlapudi, V. Balasubramanian (19 Apr 2021) [TDI]

SurvNAM: The machine learning survival model explanation
Lev V. Utkin, Egor D. Satyukov, A. Konstantinov (18 Apr 2021) [AAML, FAtt]

A-FMI: Learning Attributions from Deep Networks via Feature Map Importance
An Zhang, Xiang Wang, Chengfang Fang, Jie Shi, Tat-Seng Chua, Zehua Chen (12 Apr 2021) [FAtt]

Explaining COVID-19 and Thoracic Pathology Model Predictions by Identifying Informative Input Features
Ashkan Khakzar, Yang Zhang, W. Mansour, Yuezhi Cai, Yawei Li, Yucheng Zhang, Seong Tae Kim, Nassir Navab (01 Apr 2021) [FAtt]

NetAdaptV2: Efficient Neural Architecture Search with Fast Super-Network Training and Architecture Optimization
Tien-Ju Yang, Yi-Lun Liao, Vivienne Sze (31 Mar 2021)

Generating and Evaluating Explanations of Attended and Error-Inducing Input Regions for VQA Models
Arijit Ray, Michael Cogswell, Xiaoyu Lin, Kamran Alipour, Ajay Divakaran, Yi Yao, Giedrius Burachas (26 Mar 2021) [FAtt]

Building Reliable Explanations of Unreliable Neural Networks: Locally Smoothing Perspective of Model Interpretation
Dohun Lim, Hyeonseok Lee, Sungchan Kim (26 Mar 2021) [FAtt, AAML]

Preserve, Promote, or Attack? GNN Explanation via Topology Perturbation
Yi Sun, Abel N. Valente, Sijia Liu, Dakuo Wang (25 Mar 2021) [AAML]

Group-CAM: Group Score-Weighted Visual Explanations for Deep Convolutional Networks
Qing-Long Zhang, Lu Rao, Yubin Yang (25 Mar 2021)

Extracting Causal Visual Features for Limited label Classification
Mohit Prabhushankar, Ghassan AlRegib (23 Mar 2021) [CML]

Interpretable Deep Learning: Interpretation, Interpretability, Trustworthiness, and Beyond
Xuhong Li, Haoyi Xiong, Xingjian Li, Xuanyu Wu, Xiao Zhang, Ji Liu, Jiang Bian, Dejing Dou (19 Mar 2021) [AAML, FaML, XAI, HAI]

Neural Network Attribution Methods for Problems in Geoscience: A Novel Synthetic Benchmark Dataset
Antonios Mamalakis, I. Ebert-Uphoff, E. Barnes (18 Mar 2021) [OOD]

Explanations for Occluded Images
Hana Chockler, Daniel Kroening, Youcheng Sun (05 Mar 2021)

Ensembles of Random SHAPs
Lev V. Utkin, A. Konstantinov (04 Mar 2021) [FAtt]

Benchmarking and Survey of Explanation Methods for Black Box Models
F. Bodria, F. Giannotti, Riccardo Guidotti, Francesca Naretto, D. Pedreschi, S. Rinzivillo (25 Feb 2021) [XAI]

Believe The HiPe: Hierarchical Perturbation for Fast, Robust, and Model-Agnostic Saliency Mapping
Jessica Cooper, Ognjen Arandjelovic, David J. Harrison (22 Feb 2021) [AAML]

Integrated Grad-CAM: Sensitivity-Aware Visual Explanation of Deep Convolutional Networks via Integrated Gradient-Based Scoring
S. Sattarzadeh, M. Sudhakar, Konstantinos N. Plataniotis, Jongseong Jang, Yeonjeong Jeong, Hyunwoo J. Kim (15 Feb 2021) [FAtt]

Ada-SISE: Adaptive Semantic Input Sampling for Efficient Explanation of Convolutional Neural Networks
M. Sudhakar, S. Sattarzadeh, Konstantinos N. Plataniotis, Jongseong Jang, Yeonjeong Jeong, Hyunwoo J. Kim (15 Feb 2021) [AAML]

Towards Better Explanations of Class Activation Mapping
Hyungsik Jung, Youngrock Oh (10 Feb 2021) [FAtt]

Mitigating belief projection in explainable artificial intelligence via Bayesian Teaching
Scott Cheng-Hsin Yang, Wai Keen Vong, Ravi B. Sojitra, Tomas Folke, Patrick Shafto (07 Feb 2021)

Evaluating Input Perturbation Methods for Interpreting CNNs and Saliency Map Comparison
Lukas Brunke, Prateek Agrawal, Nikhil George (26 Jan 2021) [AAML, FAtt]

Visual explanation of black-box model: Similarity Difference and Uniqueness (SIDU) method
Satya M. Muddamsetty, M. N. Jahromi, Andreea-Emilia Ciontos, Laura M. Fenoy, T. Moeslund (26 Jan 2021) [AAML]

Benchmarking Perturbation-based Saliency Maps for Explaining Atari Agents
Tobias Huber, Benedikt Limmer, Elisabeth André (18 Jan 2021) [FAtt]

Generating Attribution Maps with Disentangled Masked Backpropagation
Adria Ruiz, Antonio Agudo, Francesc Moreno (17 Jan 2021) [FAtt]

Explaining the Black-box Smoothly- A Counterfactual Approach
Junyu Chen, Yong Du, Yufan He, W. Paul Segars, Ye Li (11 Jan 2021) [MedIm, FAtt]

iGOS++: Integrated Gradient Optimized Saliency by Bilateral Perturbations
Saeed Khorram, T. Lawson, Fuxin Li (31 Dec 2020) [AAML, FAtt]

Quantitative Evaluations on Saliency Methods: An Experimental Study
Xiao-hui Li, Yuhan Shi, Haoyang Li, Wei Bai, Yuanwei Song, Caleb Chen Cao, Lei Chen (31 Dec 2020) [FAtt, XAI]

Enhanced Regularizers for Attributional Robustness
A. Sarkar, Anirban Sarkar, V. Balasubramanian (28 Dec 2020)

AdjointBackMap: Reconstructing Effective Decision Hypersurfaces from CNN Layers Using Adjoint Operators
Qing Wan, Yoonsuck Choe (16 Dec 2020)

Interpreting Deep Neural Networks with Relative Sectional Propagation by Analyzing Comparative Gradients and Hostile Activations
Woo-Jeoung Nam, Jaesik Choi, Seong-Whan Lee (07 Dec 2020) [FAtt, AAML]