ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

RISE: Randomized Input Sampling for Explanation of Black-box Models
arXiv:1806.07421 · 19 June 2018
Vitali Petsiuk, Abir Das, Kate Saenko
FAtt

Papers citing "RISE: Randomized Input Sampling for Explanation of Black-box Models"

50 / 653 papers shown
Attribution-based XAI Methods in Computer Vision: A Review
Kumar Abhishek, Deeksha Kamath
27 Nov 2022

Explaining Image Classifiers with Multiscale Directional Image Representation
Stefan Kolek, Robert Windesheim, Héctor Andrade-Loarca, Gitta Kutyniok, Ron Levie
22 Nov 2022

CRAFT: Concept Recursive Activation FacTorization for Explainability
Thomas Fel, Agustin Picard, Louis Bethune, Thibaut Boissin, David Vigouroux, Julien Colin, Rémi Cadène, Thomas Serre
17 Nov 2022

Model free variable importance for high dimensional data
Naofumi Hama, Masayoshi Mase, Art B. Owen
15 Nov 2022

Identifying Spurious Correlations and Correcting them with an Explanation-based Learning
Misgina Tsighe Hagos, Kathleen M. Curran, Brian Mac Namee
15 Nov 2022

Explaining Cross-Domain Recognition with Interpretable Deep Classifier
Yiheng Zhang, Ting Yao, Zhaofan Qiu, Tao Mei
OOD
15 Nov 2022

A Survey on Explainable Reinforcement Learning: Concepts, Algorithms, Challenges
Yunpeng Qing, Shunyu Liu, Mingli Song, Huiqiong Wang
XAI
12 Nov 2022
What Makes a Good Explanation?: A Harmonized View of Properties of Explanations
Zixi Chen, Varshini Subhash, Marton Havasi, Weiwei Pan, Finale Doshi-Velez
XAI, FAtt
10 Nov 2022

Generative Adversarial Networks for Weakly Supervised Generation and Evaluation of Brain Tumor Segmentations on MR Images
Jayeon Yoo, Khashayar Namdar, Matthias W. Wagner, L. Nobre, U. Tabori, C. Hawkins, B. Ertl-Wagner, Farzad Khalvati
GAN, MedIm
10 Nov 2022

On the Robustness of Explanations of Deep Neural Network Models: A Survey
Amlan Jyoti, Karthik Balaji Ganesh, Manoj Gayala, Nandita Lakshmi Tunuguntla, Sandesh Kamath, V. Balasubramanian
XAI, FAtt, AAML
09 Nov 2022

Deep Learning based Computer Vision Methods for Complex Traffic Environments Perception: A Review
Talha Azfar, Jinlong Li, Hongkai Yu, R. Cheu, Yisheng Lv, Ruimin Ke
09 Nov 2022

ViT-CX: Causal Explanation of Vision Transformers
Weiyan Xie, Xiao-hui Li, Caleb Chen Cao, Nevin L. Zhang
ViT
06 Nov 2022

New Definitions and Evaluations for Saliency Methods: Staying Intrinsic, Complete and Sound
Arushi Gupta, Nikunj Saunshi, Dingli Yu, Kaifeng Lyu, Sanjeev Arora
AAML, FAtt, XAI
05 Nov 2022
Exploring Explainability Methods for Graph Neural Networks
Harsh Patel, Shivam Sahni
03 Nov 2022

BOREx: Bayesian-Optimization-Based Refinement of Saliency Map for Image- and Video-Classification Models
Atsushi Kikuchi, Kotaro Uchida, Masaki Waga, Kohei Suenaga
FAtt
31 Oct 2022

Multi-Viewpoint and Multi-Evaluation with Felicitous Inductive Bias Boost Machine Abstract Reasoning Ability
Qinglai Wei, Diancheng Chen, Beiming Yuan
26 Oct 2022

PlanT: Explainable Planning Transformers via Object-Level Representations
Katrin Renz, Kashyap Chitta, Otniel-Bogdan Mercea, A. Sophia Koepke, Zeynep Akata, Andreas Geiger
ViT
25 Oct 2022

ATCON: Attention Consistency for Vision Models
Ali Mirzazadeh, Florian Dubost, M. Pike, Krish Maniar, Max Zuo, Christopher Lee-Messer, D. Rubin
18 Oct 2022

Class-Specific Explainability for Deep Time Series Classifiers
Ramesh Doddaiah, Prathyush S. Parvatharaju, Elke A. Rundensteiner, Thomas Hartvigsen
FAtt, AI4TS
11 Oct 2022

Improving Data-Efficient Fossil Segmentation via Model Editing
Indu Panigrahi, Ryan Manzuk, A. Maloof, Ruth C. Fong
08 Oct 2022

Quantitative Metrics for Evaluating Explanations of Video DeepFake Detectors
Federico Baldassarre, Quentin Debard, Gonzalo Fiz Pontiveros, Tri Kurniawan Wijaya
07 Oct 2022

"Help Me Help the AI": Understanding How Explainability Can Support Human-AI Interaction
Sunnie S. Y. Kim, E. A. Watkins, Olga Russakovsky, Ruth C. Fong, Andrés Monroy-Hernández
02 Oct 2022
Contrastive Corpus Attribution for Explaining Representations
Christy Lin, Hugh Chen, Chanwoo Kim, Su-In Lee
SSL
30 Sep 2022

Evaluation of importance estimators in deep learning classifiers for Computed Tomography
L. Brocki, Wistan Marchadour, Jonas Maison, B. Badic, P. Papadimitroulas, M. Hatt, Franck Vermet, N. C. Chung
30 Sep 2022

Verifiable and Energy Efficient Medical Image Analysis with Quantised Self-attentive Deep Neural Networks
Rakshith Sathish, S. Khare, Debdoot Sheet
30 Sep 2022

Recipro-CAM: Fast gradient-free visual explanations for convolutional neural networks
Seokhyun Byun, Won-Jo Lee
FAtt
28 Sep 2022

WeightedSHAP: analyzing and improving Shapley based feature attributions
Yongchan Kwon, James Zou
TDI, FAtt
27 Sep 2022

Ablation Path Saliency
Justus Sagemüller, Olivier Verdier
FAtt, AAML
26 Sep 2022

Learning Visual Explanations for DCNN-Based Image Classifiers Using an Attention Mechanism
Ioanna Gkartzonika, Nikolaos Gkalelis, Vasileios Mezaris
22 Sep 2022

Deep Superpixel Generation and Clustering for Weakly Supervised Segmentation of Brain Tumors in MR Images
Jayeon Yoo, Khashayar Namdar, Farzad Khalvati
20 Sep 2022
A model-agnostic approach for generating Saliency Maps to explain inferred decisions of Deep Learning Models
S. Karatsiolis, A. Kamilaris
FAtt
19 Sep 2022

Look where you look! Saliency-guided Q-networks for generalization in visual Reinforcement Learning
David Bertoin, Adil Zouitine, Mehdi Zouitine, Emmanuel Rachelson
16 Sep 2022

"Is your explanation stable?": A Robustness Evaluation Framework for Feature Attribution
Yuyou Gan, Yuhao Mao, Xuhong Zhang, S. Ji, Yuwen Pu, Meng Han, Jianwei Yin, Ting Wang
FAtt, AAML
05 Sep 2022

Generating detailed saliency maps using model-agnostic methods
Maciej Sakowicz
FAtt
04 Sep 2022

Concept Gradient: Concept-based Interpretation Without Linear Assumption
Andrew Bai, Chih-Kuan Yeh, Pradeep Ravikumar, Neil Y. C. Lin, Cho-Jui Hsieh
31 Aug 2022

Causality-Inspired Taxonomy for Explainable Artificial Intelligence
Pedro C. Neto, Tiago B. Gonçalves, João Ribeiro Pinto, W. Silva, Ana F. Sequeira, Arun Ross, Jaime S. Cardoso
XAI
19 Aug 2022

Visual Explanation of Deep Q-Network for Robot Navigation by Fine-tuning Attention Branch
Yuya Maruyama, Hiroshi Fukui, Tsubasa Hirakawa, Takayoshi Yamashita, H. Fujiyoshi, K. Sugiura
18 Aug 2022

The Weighting Game: Evaluating Quality of Explainability Methods
Lassi Raatikainen, Esa Rahtu
FAtt, XAI
12 Aug 2022
Shap-CAM: Visual Explanations for Convolutional Neural Networks based on Shapley Value
Quan Zheng, Ziwei Wang, Jie Zhou, Jiwen Lu
FAtt
07 Aug 2022

Explaining Deep Neural Networks for Point Clouds using Gradient-based Visualisations
Jawad Tayyub, M. Sarmad, Nicolas Schonborn
3DPC, FAtt
26 Jul 2022

Adaptive occlusion sensitivity analysis for visually explaining video recognition networks
Tomoki Uchiyama, Naoya Sogi, S. Iizuka, Koichiro Niinuma, Kazuhiro Fukui
26 Jul 2022

Overlooked factors in concept-based explanations: Dataset choice, concept learnability, and human capability
V. V. Ramaswamy, Sunnie S. Y. Kim, Ruth C. Fong, Olga Russakovsky
FAtt
20 Jul 2022

MDM: Multiple Dynamic Masks for Visual Explanation of Neural Networks
Yitao Peng, Longzhen Yang, Yihang Liu, Lianghua He
17 Jul 2022

Anomalous behaviour in loss-gradient based interpretability methods
Vinod Subramanian, Francesco Ferroni, Emmanouil Benetos, Mark Sandler
15 Jul 2022

Explainable Intrusion Detection Systems (X-IDS): A Survey of Current Methods, Challenges, and Opportunities
Subash Neupane, Jesse Ables, William Anderson, Sudip Mittal, Shahram Rahimi, I. Banicescu, Maria Seale
AAML
13 Jul 2022

Rethinking gradient weights' influence over saliency map estimation
Masud An Nur Islam Fahim, Nazmus Saqib, Shafkat Khan Siam, H. Jung
FAtt
12 Jul 2022
A clinically motivated self-supervised approach for content-based image retrieval of CT liver images
Kristoffer Wickstrøm, Eirik Agnalt Ostmo, Keyur Radiya, Karl Øyvind Mikalsen, Michael C. Kampffmeyer, Robert Jenssen
SSL
11 Jul 2022

Abs-CAM: A Gradient Optimization Interpretable Approach for Explanation of Convolutional Neural Networks
Chunyan Zeng, Kang Yan, Zhifeng Wang, Yan Yu, Shiyan Xia, Nan Zhao
FAtt
08 Jul 2022

Calibrate to Interpret
Gregory Scafarto, N. Posocco, Antoine Bonnefoy
FaML
07 Jul 2022

An Additive Instance-Wise Approach to Multi-class Model Interpretation
Vy Vo, Van Nguyen, Trung Le, Quan Hung Tran, Gholamreza Haffari, S. Çamtepe, Dinh Q. Phung
FAtt
07 Jul 2022