ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

RISE: Randomized Input Sampling for Explanation of Black-box Models [FAtt]
Vitali Petsiuk, Abir Das, Kate Saenko. 19 June 2018. arXiv:1806.07421.
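As the title says, RISE explains a black-box classifier by masking the input with random binary masks, querying the model on each masked image, and averaging the masks weighted by the model's output score. The sketch below illustrates that sampling loop under stated simplifications: the `model` callable, mask count, and nearest-neighbor upsampling via `np.kron` are assumptions for illustration (the paper uses bilinearly upsampled, randomly shifted masks), not the authors' exact implementation.

```python
import numpy as np

def rise_saliency(model, image, n_masks=1000, grid=8, p=0.5, rng=None):
    """Estimate a RISE-style saliency map for a black-box `model`.

    Assumes `model(batch)` returns class scores of shape (batch, n_classes)
    and `image` has shape (H, W, C) with values in [0, 1].
    """
    rng = np.random.default_rng(rng)
    H, W = image.shape[:2]
    # Sample coarse binary grids (each cell kept with probability p), then
    # upsample to image size so masks cover contiguous regions, not pixels.
    grids = (rng.random((n_masks, grid, grid)) < p).astype(np.float32)
    cell = np.ones((H // grid + 1, W // grid + 1), dtype=np.float32)
    masks = np.stack([np.kron(g, cell)[:H, :W] for g in grids])
    # Query the black box on masked inputs; no gradients are needed.
    scores = model(masks[..., None] * image[None])  # (n_masks, n_classes)
    # Weight each mask by the score it produced and average:
    # S_c ~ E[score_c * mask] / p.
    saliency = np.tensordot(scores.T, masks, axes=1)  # (n_classes, H, W)
    return saliency / (n_masks * p)
```

Indexing the result with the predicted class index yields a heatmap over the input; because only forward passes are used, the same sketch applies to any model that exposes scores.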

Papers citing "RISE: Randomized Input Sampling for Explanation of Black-box Models"

50 / 652 papers shown
• Proactive Pseudo-Intervention: Causally Informed Contrastive Learning For Interpretable Vision Models. Dong Wang, Yuewei Yang, Chenyang Tao, Zhe Gan, Liqun Chen, Fanjie Kong, Ricardo Henao, Lawrence Carin. 06 Dec 2020.
• BayLIME: Bayesian Local Interpretable Model-Agnostic Explanations [FAtt]. Xingyu Zhao, Wei Huang, Xiaowei Huang, Valentin Robu, David Flynn. 05 Dec 2020.
• Achievements and Challenges in Explaining Deep Learning based Computer-Aided Diagnosis Systems. Adriano Lucieri, Muhammad Naseer Bajwa, Andreas Dengel, Sheraz Ahmed. 26 Nov 2020.
• Match Them Up: Visually Explainable Few-shot Image Classification [FAtt]. Bowen Wang, Liangzhi Li, Manisha Verma, Yuta Nakashima, R. Kawasaki, Hajime Nagahara. 25 Nov 2020.
• Play Fair: Frame Attributions in Video Models [FAtt]. Will Price, Dima Damen. 24 Nov 2020.
• Explaining by Removing: A Unified Framework for Model Explanation [FAtt]. Ian Covert, Scott M. Lundberg, Su-In Lee. 21 Nov 2020.
• Deep Interpretable Classification and Weakly-Supervised Segmentation of Histology Images via Max-Min Uncertainty. Soufiane Belharbi, Jérôme Rony, Jose Dolz, Ismail Ben Ayed, Luke McCaffrey, Eric Granger. 14 Nov 2020.
• One Explanation is Not Enough: Structured Attention Graphs for Image Classification [FAtt, GNN]. Vivswan Shitole, Li Fuxin, Minsuk Kahng, Prasad Tadepalli, Alan Fern. 13 Nov 2020.
• Feature Removal Is a Unifying Principle for Model Explanation Methods [FAtt]. Ian Covert, Scott M. Lundberg, Su-In Lee. 06 Nov 2020.
• Attribution Preservation in Network Compression for Reliable Network Interpretation. Geondo Park, J. Yang, Sung Ju Hwang, Eunho Yang. 28 Oct 2020.
• Interpretation of NLP models through input marginalization [MILM, FAtt]. Siwon Kim, Jihun Yi, Eunji Kim, Sungroh Yoon. 27 Oct 2020.
• Benchmarking Deep Learning Interpretability in Time Series Predictions [XAI, AI4TS, FAtt]. Aya Abdelsalam Ismail, Mohamed K. Gunady, H. C. Bravo, S. Feizi. 26 Oct 2020.
• Investigating Saturation Effects in Integrated Gradients [FAtt]. Vivek Miglani, Narine Kokhlikyan, B. Alsallakh, Miguel Martin, Orion Reblitz-Richardson. 23 Oct 2020.
• Investigating and Simplifying Masking-based Saliency Methods for Model Interpretability [FAtt, AAML]. Jason Phang, Jungkyu Park, Krzysztof J. Geras. 19 Oct 2020.
• Poisoned classifiers are not only backdoored, they are fundamentally broken. Mingjie Sun, Siddhant Agarwal, J. Zico Kolter. 18 Oct 2020.
• Zoom-CAM: Generating Fine-grained Pixel Annotations from Image Labels [VLM, WSOL]. Xiangwei Shi, Seyran Khademi, Yun-qiang Li, Jan van Gemert. 16 Oct 2020.
• Interpretable Machine Learning with an Ensemble of Gradient Boosting Machines [FedML, AI4CE]. A. Konstantinov, Lev V. Utkin. 14 Oct 2020.
• Learning Propagation Rules for Attribution Map Generation [FAtt]. Yiding Yang, Jiayan Qiu, Xiuming Zhang, Dacheng Tao, Xinchao Wang. 14 Oct 2020.
• IS-CAM: Integrated Score-CAM for axiomatic-based explanations [FAtt]. Rakshit Naidu, Ankita Ghosh, Yash Maurya, K. ShamanthRNayak, Soumya Snigdha Kundu. 06 Oct 2020.
• Visualizing Color-wise Saliency of Black-Box Image Classification Models [FAtt]. Yuhki Hatakeyama, Hiroki Sakuma, Yoshinori Konishi, Kohei Suenaga. 06 Oct 2020.
• Remembering for the Right Reasons: Explanations Reduce Catastrophic Forgetting [CLL]. Sayna Ebrahimi, Suzanne Petryk, Akash Gokul, William Gan, Joseph E. Gonzalez, Marcus Rohrbach, Trevor Darrell. 04 Oct 2020.
• Explaining Convolutional Neural Networks through Attribution-Based Input Sampling and Block-Wise Feature Aggregation [FAtt, XAI]. S. Sattarzadeh, M. Sudhakar, Anthony Lem, Shervin Mehryar, K. N. Plataniotis, Jongseong Jang, Hyunwoo J. Kim, Yeonjeong Jeong, Sang-Min Lee, Kyunghoon Bae. 01 Oct 2020.
• Where is the Model Looking At?--Concentrate and Explain the Network Attention [XAI]. Wenjia Xu, Jiuniu Wang, Yang Wang, Guangluan Xu, Wei Dai, Yirong Wu. 29 Sep 2020.
• Information-Theoretic Visual Explanation for Black-Box Classifiers [FAtt]. Jihun Yi, Eunji Kim, Siwon Kim, Sungroh Yoon. 23 Sep 2020.
• Contextual Semantic Interpretability [SSL]. Diego Marcos, Ruth C. Fong, Sylvain Lobry, Rémi Flamary, Nicolas Courty, D. Tuia. 18 Sep 2020.
• SCOUTER: Slot Attention-based Classifier for Explainable Image Recognition [OCL]. Liangzhi Li, Bowen Wang, Manisha Verma, Yuta Nakashima, R. Kawasaki, Hajime Nagahara. 14 Sep 2020.
• Understanding the Role of Individual Units in a Deep Neural Network [GAN]. David Bau, Jun-Yan Zhu, Hendrik Strobelt, Àgata Lapedriza, Bolei Zhou, Antonio Torralba. 10 Sep 2020.
• Region Comparison Network for Interpretable Few-shot Image Classification. Z. Xue, Lixin Duan, Wen Li, Lin Chen, Jiebo Luo. 08 Sep 2020.
• How Good is your Explanation? Algorithmic Stability Measures to Assess the Quality of Explanations for Deep Neural Networks [XAI, FAtt]. Thomas Fel, David Vigouroux, Rémi Cadène, Thomas Serre. 07 Sep 2020.
• Explainable Artificial Intelligence for Process Mining: A General Overview and Application of a Novel Local Explanation Approach for Predictive Process Monitoring [AI4TS]. Nijat Mehdiyev, Peter Fettke. 04 Sep 2020.
• iCaps: An Interpretable Classifier via Disentangled Capsule Networks. Dahuin Jung, Jonghyun Lee, Jihun Yi, Sungroh Yoon. 20 Aug 2020.
• Mitigating Dataset Imbalance via Joint Generation and Classification [GAN]. Aadarsh Sahoo, Ankit Singh, Yikang Shen, Rogerio Feris, Abir Das. 12 Aug 2020.
• Explainable Face Recognition [CVBM]. Jonathan R. Williford, Brandon B. May, J. Byrne. 03 Aug 2020.
• An Explainable Machine Learning Model for Early Detection of Parkinson's Disease using LIME on DaTscan Imagery [FAtt]. Pavan Rajkumar Magesh, Richard Delwin Myloth, Rijo Jackson Tom. 01 Aug 2020.
• Are Visual Explanations Useful? A Case Study in Model-in-the-Loop Prediction [FAtt, LRM]. Eric Chu, D. Roy, Jacob Andreas. 23 Jul 2020.
• A Generic Visualization Approach for Convolutional Neural Networks. Ahmed Taha, Xitong Yang, Abhinav Shrivastava, L. Davis. 19 Jul 2020.
• Scientific Discovery by Generating Counterfactuals using Image Translation [DiffM, MedIm]. Arunachalam Narayanaswamy, Subhashini Venugopalan, D. Webster, L. Peng, G. Corrado, ..., Abigail E. Huang, Siva Balasubramanian, Michael P. Brenner, Phil Q. Nelson, A. Varadarajan. 10 Jul 2020.
• Counterfactual explanation of machine learning survival models [CML, OffRL]. M. Kovalev, Lev V. Utkin. 26 Jun 2020.
• Proper Network Interpretability Helps Adversarial Robustness in Classification [AAML, FAtt]. Akhilan Boopathy, Sijia Liu, Gaoyuan Zhang, Cynthia Liu, Pin-Yu Chen, Shiyu Chang, Luca Daniel. 26 Jun 2020.
• SS-CAM: Smoothed Score-CAM for Sharper Visual Feature Localization [FAtt]. Haofan Wang, Rakshit Naidu, J. Michael, Soumya Snigdha Kundu. 25 Jun 2020.
• A generalizable saliency map-based interpretation of model outcome [AAML, FAtt, MILM]. Shailja Thakur, S. Fischmeister. 16 Jun 2020.
• Lung Segmentation and Nodule Detection in Computed Tomography Scan using a Convolutional Neural Network Trained Adversarially using Turing Test Loss. Rakshith Sathish, R. Sathish, Ramanathan Sethuraman, Debdoot Sheet. 16 Jun 2020.
• Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A Survey [XAI]. Arun Das, P. Rad. 16 Jun 2020.
• SegNBDT: Visual Decision Rules for Segmentation [SSeg]. Alvin Wan, Daniel Ho, You Song, Henk Tillman, Sarah Adel Bargal, Joseph E. Gonzalez. 11 Jun 2020.
• Black-box Explanation of Object Detectors via Saliency Maps [FAtt]. Vitali Petsiuk, R. Jain, Varun Manjunatha, Vlad I. Morariu, Ashutosh Mehra, Vicente Ordonez, Kate Saenko. 05 Jun 2020.
• SIDU: Similarity Difference and Uniqueness Method for Explainable AI. Satya M. Muddamsetty, M. N. Jahromi, T. Moeslund. 04 Jun 2020.
• MFPP: Morphological Fragmental Perturbation Pyramid for Black-Box Model Explanations [AAML]. Qing Yang, Xia Zhu, Jong-Kae Fwu, Yun Ye, Ganmei You, Yuan Zhu. 04 Jun 2020.
• Evaluations and Methods for Explanation through Robustness Analysis [XAI]. Cheng-Yu Hsieh, Chih-Kuan Yeh, Xuanqing Liu, Pradeep Ravikumar, Seungyeon Kim, Sanjiv Kumar, Cho-Jui Hsieh. 31 May 2020.
• Interpretable and Accurate Fine-grained Recognition via Region Grouping. Zixuan Huang, Yin Li. 21 May 2020.
• The Skincare project, an interactive deep learning system for differential diagnosis of malignant skin lesions. Technical Report. Daniel Sonntag, Fabrizio Nunnari, H. Profitlich. 19 May 2020.