ResearchTrend.AI

RISE: Randomized Input Sampling for Explanation of Black-box Models


19 June 2018
Vitali Petsiuk, Abir Das, Kate Saenko
FAtt
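As its title suggests, RISE explains a black-box classifier by randomly masking the input and weighting each mask by the model's score on the masked image; the weighted average of the masks is the saliency map. A minimal sketch of that idea follows (the function name `rise_saliency` and its parameters are our own illustration; the paper upsamples the low-resolution masks bilinearly with a random shift, whereas this sketch uses nearest-neighbour upsampling for brevity):

```python
import numpy as np

def rise_saliency(model, image, n_masks=1000, grid=7, p_keep=0.5, seed=0):
    """Estimate a RISE-style saliency map for a black-box model.

    model: callable mapping a batch of images (N, H, W, C) to class scores (N,)
    image: array of shape (H, W, C)
    """
    rng = np.random.default_rng(seed)
    H, W = image.shape[:2]
    cell_h = int(np.ceil(H / grid))
    cell_w = int(np.ceil(W / grid))
    saliency = np.zeros((H, W))
    total = 0.0
    for _ in range(n_masks):
        # Low-resolution binary occlusion pattern, one extra cell so a
        # random shift can be applied before cropping to image size.
        small = (rng.random((grid + 1, grid + 1)) < p_keep).astype(float)
        up = np.kron(small, np.ones((cell_h, cell_w)))  # nearest-neighbour upsample
        dy = rng.integers(0, cell_h)
        dx = rng.integers(0, cell_w)
        mask = up[dy:dy + H, dx:dx + W]
        # Query the black box on the masked input; no gradients needed.
        score = float(model((image * mask[..., None])[None])[0])
        saliency += score * mask
        total += score
    # Normalize by the accumulated scores (guard against an all-zero run).
    return saliency / max(total, 1e-12)
```

Because the model is queried only through its outputs, the same sketch applies to any classifier, which is the sense in which RISE is "black-box".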

Papers citing "RISE: Randomized Input Sampling for Explanation of Black-box Models"

50 / 652 papers shown
TimeREISE: Time-series Randomized Evolving Input Sample Explanation
Dominique Mercier, Andreas Dengel, Sheraz Ahmed
AI4TS · 16 Feb 2022

Don't Lie to Me! Robust and Efficient Explainability with Verified Perturbation Analysis
Thomas Fel, Mélanie Ducoffe, David Vigouroux, Rémi Cadène, Mikael Capelle, C. Nicodeme, Thomas Serre
AAML · 15 Feb 2022

Towards Best Practice of Interpreting Deep Learning Models for EEG-based Brain Computer Interfaces
Jian Cui, Liqiang Yuan, Zhaoxiang Wang, Ruilin Li, Tianzi Jiang
12 Feb 2022

Towards Disentangling Information Paths with Coded ResNeXt
Apostolos Avranas, Marios Kountouris
FAtt · 10 Feb 2022

The Disagreement Problem in Explainable Machine Learning: A Practitioner's Perspective
Satyapriya Krishna, Tessa Han, Alex Gu, Steven Wu, S. Jabbari, Himabindu Lakkaraju
03 Feb 2022

A Consistent and Efficient Evaluation Strategy for Attribution Methods
Yao Rong, Tobias Leemann, V. Borisov, Gjergji Kasneci, Enkelejda Kasneci
FAtt · 01 Feb 2022

Metrics for saliency map evaluation of deep learning explanation methods
T. Gomez, Thomas Fréour, Harold Mouchère
XAI, FAtt · 31 Jan 2022

LAP: An Attention-Based Module for Concept Based Self-Interpretation and Knowledge Injection in Convolutional Neural Networks
Rassa Ghavami Modegh, Ahmadali Salimi, Alireza Dizaji, Hamid R. Rabiee
FAtt · 27 Jan 2022

Deeply Explain CNN via Hierarchical Decomposition
Ming-Ming Cheng, Peng-Tao Jiang, Linghao Han, Liang Wang, Philip Torr
FAtt · 23 Jan 2022

From Anecdotal Evidence to Quantitative Evaluation Methods: A Systematic Review on Evaluating Explainable AI
Meike Nauta, Jan Trienes, Shreyasi Pathak, Elisa Nguyen, Michelle Peters, Yasmin Schmitt, Jorg Schlotterer, M. V. Keulen, C. Seifert
ELM, XAI · 20 Jan 2022

A Cognitive Explainer for Fetal ultrasound images classifier Based on Medical Concepts
Ying-Shuai Wang, Yunxia Liu, Licong Dong, Xuzhou Wu, Huabin Zhang, Qiongyu Ye, Desheng Sun, Xiaobo Zhou, Kehong Yuan
19 Jan 2022

Black-box Safety Analysis and Retraining of DNNs based on Feature Extraction and Clustering
M. Attaoui, Hazem M. Fahmy, F. Pastore, Lionel C. Briand
AAML · 13 Jan 2022

Explainable Artificial Intelligence Methods in Combating Pandemics: A Systematic Review
F. Giuste, Wenqi Shi, Yuanda Zhu, Tarun Naren, Monica Isgut, Ying Sha, L. Tong, Mitali S. Gupte, May D. Wang
23 Dec 2021

RELAX: Representation Learning Explainability
Kristoffer Wickstrøm, Daniel J. Trosten, Sigurd Løkse, Ahcène Boubekki, Karl Øyvind Mikalsen, Michael C. Kampffmeyer, Robert Jenssen
FAtt · 19 Dec 2021

Does Explainable Machine Learning Uncover the Black Box in Vision Applications?
Manish Narwaria
AAML, VLM, XAI · 18 Dec 2021

What I Cannot Predict, I Do Not Understand: A Human-Centered Evaluation Framework for Explainability Methods
Julien Colin, Thomas Fel, Rémi Cadène, Thomas Serre
06 Dec 2021

HIVE: Evaluating the Human Interpretability of Visual Explanations
Sunnie S. Y. Kim, Nicole Meister, V. V. Ramaswamy, Ruth C. Fong, Olga Russakovsky
06 Dec 2021

Localized Perturbations For Weakly-Supervised Segmentation of Glioma Brain Tumours
Sajith Rajapaksa, Farzad Khalvati
29 Nov 2021

Improving Deep Learning Interpretability by Saliency Guided Training
Aya Abdelsalam Ismail, H. C. Bravo, S. Feizi
FAtt · 29 Nov 2021

Reinforcement Explanation Learning
Siddhant Agarwal, Owais Iqbal, Sree Aditya Buridi, Madda Manjusha, Abir Das
FAtt · 26 Nov 2021

LIMEcraft: Handcrafted superpixel selection and inspection for Visual eXplanations
Weronika Hryniewska, Adrianna Grudzień, P. Biecek
FAtt · 15 Nov 2021

A Robust Unsupervised Ensemble of Feature-Based Explanations using Restricted Boltzmann Machines
V. Borisov, Johannes Meier, J. V. D. Heuvel, Hamed Jalali, Gjergji Kasneci
FAtt · 14 Nov 2021

Self-Interpretable Model with Transformation Equivariant Interpretation
Yipei Wang, Xiaoqian Wang
09 Nov 2021

Look at the Variance! Efficient Black-box Explanations with Sobol-based Sensitivity Analysis
Thomas Fel, Rémi Cadène, Mathieu Chalvidal, Matthieu Cord, David Vigouroux, Thomas Serre
MLAU, FAtt, AAML · 07 Nov 2021

Human Attention in Fine-grained Classification
Yao Rong, Wenjia Xu, Zeynep Akata, Enkelejda Kasneci
02 Nov 2021

Gradient Frequency Modulation for Visually Explaining Video Understanding Models
Xinmiao Lin, Wentao Bao, Matthew Wright, Yu Kong
FAtt, AAML · 01 Nov 2021

ST-ABN: Visual Explanation Taking into Account Spatio-temporal Information for Video Recognition
Masahiro Mitsuhara, Tsubasa Hirakawa, Takayoshi Yamashita, H. Fujiyoshi
29 Oct 2021

ProtoShotXAI: Using Prototypical Few-Shot Architecture for Explainable AI
Samuel Hess, G. Ditzler
AAML · 22 Oct 2021

TorchEsegeta: Framework for Interpretability and Explainability of Image-based Deep Learning Models
S. Chatterjee, Arnab Das, Chirag Mandal, Budhaditya Mukhopadhyay, Manish Vipinraj, Aniruddh Shukla, R. Rao, Chompunuch Sarasaen, Oliver Speck, A. Nürnberger
MedIm · 16 Oct 2021

TSGB: Target-Selective Gradient Backprop for Probing CNN Visual Saliency
Lin Cheng, Pengfei Fang, Yanjie Liang, Liao Zhang, Chunhua Shen, Hanzi Wang
FAtt · 11 Oct 2021

Cartoon Explanations of Image Classifiers
Stefan Kolek, Duc Anh Nguyen, Ron Levie, Joan Bruna, Gitta Kutyniok
FAtt · 07 Oct 2021

Consistent Explanations by Contrastive Learning
Vipin Pillai, Soroush Abbasi Koohpayegani, Ashley Ouligian, Dennis Fong, Hamed Pirsiavash
FAtt · 01 Oct 2021

XPROAX-Local explanations for text classification with progressive neighborhood approximation
Yi Cai, Arthur Zimek, Eirini Ntoutsi
30 Sep 2021

Optimising for Interpretability: Convolutional Dynamic Alignment Networks
Moritz D Boehle, Mario Fritz, Bernt Schiele
27 Sep 2021

From Heatmaps to Structural Explanations of Image Classifiers
Li Fuxin, Zhongang Qi, Saeed Khorram, Vivswan Shitole, Prasad Tadepalli, Minsuk Kahng, Alan Fern
XAI, FAtt · 13 Sep 2021

Logic Traps in Evaluating Attribution Scores
Yiming Ju, Yuanzhe Zhang, Zhao Yang, Zhongtao Jiang, Kang Liu, Jun Zhao
XAI, FAtt · 12 Sep 2021

Deriving Explanation of Deep Visual Saliency Models
S. Malladi, J. Mukhopadhyay, M. Larabi, S. Chaudhury
FAtt, XAI · 08 Sep 2021

Cross-Model Consensus of Explanations and Beyond for Image Classification Models: An Empirical Study
Xuhong Li, Haoyi Xiong, Siyu Huang, Shilei Ji, Dejing Dou
02 Sep 2021

Spatio-Temporal Perturbations for Video Attribution
Zhenqiang Li, Weimin Wang, Zuoyue Li, Yifei Huang, Yoichi Sato
01 Sep 2021

A Comparison of Deep Saliency Map Generators on Multispectral Data in Object Detection
Jens Bayer, David Munch, Michael Arens
3DPC · 26 Aug 2021

Understanding of Kernels in CNN Models by Suppressing Irrelevant Visual Features in Images
Jiafan Zhuang, Wanying Tao, Jianfei Xing, Wei Shi, Ruixuan Wang, Weishi Zheng
FAtt · 25 Aug 2021

Human-in-the-loop Extraction of Interpretable Concepts in Deep Learning Models
Zhenge Zhao, Panpan Xu, C. Scheidegger, Liu Ren
08 Aug 2021

Where do Models go Wrong? Parameter-Space Saliency Maps for Explainability
Roman Levin, Manli Shu, Eitan Borgnia, Furong Huang, Micah Goldblum, Tom Goldstein
FAtt, AAML · 03 Aug 2021

Finding Discriminative Filters for Specific Degradations in Blind Super-Resolution
Liangbin Xie, Xintao Wang, Chao Dong, Zhongang Qi, Ying Shan
02 Aug 2021

Temporal Dependencies in Feature Importance for Time Series Predictions
Kin Kwan Leung, Clayton Rooke, Jonathan Smith, S. Zuberi, M. Volkovs
OOD, AI4TS · 29 Jul 2021

Resisting Out-of-Distribution Data Problem in Perturbation of XAI
Luyu Qiu, Yi Yang, Caleb Chen Cao, Jing Liu, Yueyuan Zheng, H. Ngai, J. H. Hsiao, Lei Chen
27 Jul 2021

Attribution of Predictive Uncertainties in Classification Models
Iker Perez, Piotr Skalski, Alec E. Barns-Graham, Jason Wong, David Sutton
UQCV · 19 Jul 2021

FastSHAP: Real-Time Shapley Value Estimation
N. Jethani, Mukund Sudarshan, Ian Covert, Su-In Lee, Rajesh Ranganath
TDI, FAtt · 15 Jul 2021

A Review of Explainable Artificial Intelligence in Manufacturing
G. Sofianidis, Jože M. Rožanec, Dunja Mladenić, D. Kyriazis
05 Jul 2021

Improving a neural network model by explanation-guided training for glioma classification based on MRI data
Frantisek Sefcik, Wanda Benesova
05 Jul 2021