ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

RISE: Randomized Input Sampling for Explanation of Black-box Models

Vitali Petsiuk, Abir Das, Kate Saenko · FAtt · 19 June 2018 · arXiv:1806.07421
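For readers skimming this citation list, it may help to recall what the paper above computes. The sketch below is a minimal NumPy rendition of the RISE idea, not the authors' code: probe the black-box model with randomly masked copies of the input and average the masks weighted by the model's score. The function name `rise_saliency`, the `model_fn` interface, and the nearest-neighbour upsampling (the paper uses bilinear upsampling with a random shift) are illustrative assumptions.

```python
import numpy as np

def rise_saliency(model_fn, image, n_masks=1000, grid=7, p=0.5, seed=None):
    """Monte Carlo estimate of a RISE-style saliency map for one image.

    model_fn : callable taking a batch of masked images (N, H, W, C) and
               returning the score of the class of interest, shape (N,).
    image    : float array of shape (H, W, C).
    """
    rng = np.random.default_rng(seed)
    H, W = image.shape[:2]
    cell_h, cell_w = H // grid + 1, W // grid + 1
    saliency = np.zeros((H, W))
    for _ in range(n_masks):
        # Low-resolution binary grid; each cell is kept with probability p.
        small = (rng.random((grid, grid)) < p).astype(float)
        # Upsample to image size (nearest-neighbour here; the paper uses
        # smooth bilinear upsampling with a random spatial shift).
        mask = np.kron(small, np.ones((cell_h, cell_w)))[:H, :W]
        # Query the black box on the masked input and weight the mask
        # by the score it obtained.
        score = float(model_fn((image * mask[..., None])[None])[0])
        saliency += score * mask
    # Normalise by the expected number of times each pixel stays unmasked.
    return saliency / (n_masks * p)
```

Because only forward passes of `model_fn` are used, the same sketch applies to any black-box scorer; a toy "model" that averages pixel intensities will assign the highest saliency to the brightest region.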

Papers citing "RISE: Randomized Input Sampling for Explanation of Black-box Models"

Showing 50 of 653 citing papers:

  • SESS: Saliency Enhancing with Scaling and Sliding
    Osman Tursun, Simon Denman, Sridha Sridharan, Clinton Fookes · 05 Jul 2022
  • Fidelity of Ensemble Aggregation for Saliency Map Explanations using Bayesian Optimization Techniques
    Yannik Mahlau, Christian Nolde · FAtt · 04 Jul 2022
  • Analyzing Explainer Robustness via Probabilistic Lipschitzness of Prediction Functions
    Zulqarnain Khan, Davin Hill, A. Masoomi, Joshua Bone, Jennifer Dy · AAML · 24 Jun 2022
  • Explanation-based Counterfactual Retraining (XCR): A Calibration Method for Black-box Models
    Liu Zhendong, Wenyu Jiang, Yan Zhang, Chongjun Wang · CML · 22 Jun 2022
  • OpenXAI: Towards a Transparent Evaluation of Model Explanations
    Chirag Agarwal, Dan Ley, Satyapriya Krishna, Eshika Saxena, Martin Pawelczyk, Nari Johnson, Isha Puri, Marinka Zitnik, Himabindu Lakkaraju · XAI · 22 Jun 2022
  • Visualizing and Understanding Contrastive Learning
    Fawaz Sammani, Boris Joukovsky, Nikos Deligiannis · SSL, FAtt · 20 Jun 2022
  • FD-CAM: Improving Faithfulness and Discriminability of Visual Explanation for CNNs
    Hui Li, Zihao Li, Rui Ma, Tieru Wu · FAtt · 17 Jun 2022
  • ELUDE: Generating interpretable explanations via a decomposition into labelled and unlabelled features
    V. V. Ramaswamy, Sunnie S. Y. Kim, Nicole Meister, Ruth C. Fong, Olga Russakovsky · FAtt · 15 Jun 2022
  • On the explainable properties of 1-Lipschitz Neural Networks: An Optimal Transport Perspective
    M. Serrurier, Franck Mamalet, Thomas Fel, Louis Bethune, Thibaut Boissin · AAML, FAtt · 14 Jun 2022
  • Making Sense of Dependence: Efficient Black-box Explanations Using Dependence Measure
    Paul Novello, Thomas Fel, David Vigouroux · FAtt · 13 Jun 2022
  • Learning to Estimate Shapley Values with Vision Transformers
    Ian Covert, Chanwoo Kim, Su-In Lee · FAtt · 10 Jun 2022
  • Xplique: A Deep Learning Explainability Toolbox
    Thomas Fel, Lucas Hervier, David Vigouroux, Antonin Poché, Justin Plakoo, ..., Agustin Picard, C. Nicodeme, Laurent Gardes, G. Flandin, Thomas Serre · 09 Jun 2022
  • Spatial-temporal Concept based Explanation of 3D ConvNets
    Yi Ji, Yu Wang, K. Mori, Jien Kato · 3DPC, FAtt · 09 Jun 2022
  • Large Loss Matters in Weakly Supervised Multi-Label Classification
    Youngwook Kim, Jae Myung Kim, Zeynep Akata, Jungwook Lee · NoLa · 08 Jun 2022
  • Explainable Artificial Intelligence (XAI) for Internet of Things: A Survey
    İbrahim Kök, Feyza Yıldırım Okay, Özgecan Muyanlı, S. Özdemir · XAI · 07 Jun 2022
  • Saliency Cards: A Framework to Characterize and Compare Saliency Methods
    Angie Boggust, Harini Suresh, Hendrik Strobelt, John Guttag, Arvindmani Satyanarayan · FAtt, XAI · 07 Jun 2022
  • Which Explanation Should I Choose? A Function Approximation Perspective to Characterizing Post Hoc Explanations
    Tessa Han, Suraj Srinivas, Himabindu Lakkaraju · FAtt · 02 Jun 2022
  • On the Eigenvalues of Global Covariance Pooling for Fine-grained Visual Recognition
    Yue Song, N. Sebe, Wei Wang · 26 May 2022
  • How explainable are adversarially-robust CNNs?
    Mehdi Nourelahi, Lars Kotthoff, Peijie Chen, Anh Totti Nguyen · AAML, FAtt · 25 May 2022
  • Deletion and Insertion Tests in Regression Models
    Naofumi Hama, Masayoshi Mase, Art B. Owen · 25 May 2022
  • An interpretation of the final fully connected layer
    Siddhartha · 24 May 2022
  • Faithful Explanations for Deep Graph Models
    Zifan Wang, Yuhang Yao, Chaoran Zhang, Han Zhang, Youjie Kang, Carlee Joe-Wong, Matt Fredrikson, Anupam Datta · FAtt · 24 May 2022
  • What You See is What You Classify: Black Box Attributions
    Steven Stalder, Nathanael Perraudin, R. Achanta, Fernando Perez-Cruz, Michele Volpi · FAtt · 23 May 2022
  • Learnable Visual Words for Interpretable Image Recognition
    Wenxi Xiao, Zhengming Ding, Hongfu Liu · VLM · 22 May 2022
  • Towards Better Understanding Attribution Methods
    Sukrut Rao, Moritz Bohle, Bernt Schiele · XAI · 20 May 2022
  • B-cos Networks: Alignment is All We Need for Interpretability
    Moritz D Boehle, Mario Fritz, Bernt Schiele · 20 May 2022
  • The Solvability of Interpretability Evaluation Metrics
    Yilun Zhou, J. Shah · 18 May 2022
  • A Psychological Theory of Explainability
    Scott Cheng-Hsin Yang, Tomas Folke, Patrick Shafto · XAI, FAtt · 17 May 2022
  • Fairness via Explanation Quality: Evaluating Disparities in the Quality of Post hoc Explanations
    Jessica Dai, Sohini Upadhyay, Ulrich Aïvodji, Stephen H. Bach, Himabindu Lakkaraju · 15 May 2022
  • Comparison of attention models and post-hoc explanation methods for embryo stage identification: a case study
    T. Gomez, Thomas Fréour, Harold Mouchère · 13 May 2022
  • Explainable Deep Learning Methods in Medical Image Classification: A Survey
    Cristiano Patrício, João C. Neves, Luís F. Teixeira · XAI · 10 May 2022
  • Poly-CAM: High resolution class activation map for convolutional neural networks
    A. Englebert, O. Cornu, Christophe De Vleeschouwer · 28 Apr 2022
  • Perception Visualization: Seeing Through the Eyes of a DNN
    Loris Giulivi, Mark J. Carman, Giacomo Boracchi · 21 Apr 2022
  • Learning Compositional Representations for Effective Low-Shot Generalization
    Samarth Mishra, Pengkai Zhu, Venkatesh Saligrama · OCL · 17 Apr 2022
  • OccAM's Laser: Occlusion-based Attribution Maps for 3D Object Detectors on LiDAR Data
    David Schinagl, Georg Krispel, Horst Possegger, P. Roth, Horst Bischof · 3DPC · 13 Apr 2022
  • Maximum Entropy Baseline for Integrated Gradients
    Hanxiao Tan · FAtt · 12 Apr 2022
  • Reliable Visualization for Deep Speaker Recognition
    Pengqi Li, Lantian Li, A. Hamdulla, Dong Wang · HAI · 08 Apr 2022
  • CLEVR-X: A Visual Reasoning Dataset for Natural Language Explanations
    Leonard Salewski, A. Sophia Koepke, Hendrik P. A. Lensch, Zeynep Akata · LRM, NAI · 05 Apr 2022
  • Simulator-based explanation and debugging of hazard-triggering events in DNN-based safety-critical systems
    Hazem M. Fahmy, F. Pastore, Lionel C. Briand, Thomas Stifter · AAML · 01 Apr 2022
  • Diffusion Models for Counterfactual Explanations
    Guillaume Jeanneret, Loïc Simon, F. Jurie · DiffM · 29 Mar 2022
  • Cycle-Consistent Counterfactuals by Latent Transformations
    Saeed Khorram, Li Fuxin · BDL · 28 Mar 2022
  • A Unified Study of Machine Learning Explanation Evaluation Metrics
    Yipei Wang, Xiaoqian Wang · XAI · 27 Mar 2022
  • HINT: Hierarchical Neuron Concept Explainer
    Andong Wang, Wei-Ning Lee, Xiaojuan Qi · 27 Mar 2022
  • Making Heads or Tails: Towards Semantically Consistent Visual Counterfactuals
    Simon Vandenhende, D. Mahajan, Filip Radenovic, Deepti Ghadiyaram · 24 Mar 2022
  • Explaining Classifiers by Constructing Familiar Concepts
    Johannes Schneider, M. Vlachos · 07 Mar 2022
  • Human-Centered Concept Explanations for Neural Networks
    Chih-Kuan Yeh, Been Kim, Pradeep Ravikumar · FAtt · 25 Feb 2022
  • A Rigorous Study of Integrated Gradients Method and Extensions to Internal Neuron Attributions
    Daniel Lundstrom, Tianjian Huang, Meisam Razaviyayn · FAtt · 24 Feb 2022
  • First is Better Than Last for Language Data Influence
    Chih-Kuan Yeh, Ankur Taly, Mukund Sundararajan, Frederick Liu, Pradeep Ravikumar · TDI · 24 Feb 2022
  • Explanatory Paradigms in Neural Networks
    Ghassan AlRegib, Mohit Prabhushankar · FAtt, XAI · 24 Feb 2022
  • Evaluating Feature Attribution Methods in the Image Domain
    Arne Gevaert, Axel-Jan Rousseau, Thijs Becker, D. Valkenborg, T. D. Bie, Yvan Saeys · FAtt · 22 Feb 2022