On the Robustness of Interpretability Methods
David Alvarez-Melis, Tommi Jaakkola
21 June 2018 · arXiv:1806.08049

Papers citing "On the Robustness of Interpretability Methods"

50 / 70 papers shown
Gender Bias in Explainability: Investigating Performance Disparity in Post-hoc Methods
Mahdi Dhaini, Ege Erdogan, Nils Feldhus, Gjergji Kasneci
02 May 2025
Axiomatic Explainer Globalness via Optimal Transport
Davin Hill, Josh Bone, A. Masoomi, Max Torop, Jennifer Dy
13 Mar 2025
Feature Importance Depends on Properties of the Data: Towards Choosing the Correct Explanations for Your Data and Decision Trees based Models
Célia Wafa Ayad, Thomas Bonnier, Benjamin Bosch, Sonali Parbhoo, Jesse Read
FAtt, XAI · 11 Feb 2025
The Effect of Similarity Measures on Accurate Stability Estimates for Local Surrogate Models in Text-based Explainable AI
Christopher Burger, Charles Walter, Thai Le
AAML · 20 Jan 2025
A Tale of Two Imperatives: Privacy and Explainability
Supriya Manna, Niladri Sett
30 Dec 2024
Restyling Unsupervised Concept Based Interpretable Networks with Generative Models
Jayneel Parekh, Quentin Bouniot, Pavlo Mozharovskyi, A. Newson, Florence d'Alché-Buc
SSL · 01 Jul 2024
Stability of Explainable Recommendation
Sairamvinay Vijayaraghavan, Prasant Mohapatra
AAML · 03 May 2024
Robust Explainable Recommendation
Sairamvinay Vijayaraghavan, Prasant Mohapatra
AAML · 03 May 2024
T-Explainer: A Model-Agnostic Explainability Framework Based on Gradients
Evandro S. Ortigossa, Fábio F. Dias, Brian Barr, Claudio T. Silva, L. G. Nonato
FAtt · 25 Apr 2024
Interpretable Prediction and Feature Selection for Survival Analysis
Mike Van Ness, Madeleine Udell
23 Apr 2024
Sparse Explanations of Neural Networks Using Pruned Layer-Wise Relevance Propagation
Paulo Yanez Sarmiento, Simon Witzke, Nadja Klein, Bernhard Y. Renard
FAtt, AAML · 22 Apr 2024
On the Value of Labeled Data and Symbolic Methods for Hidden Neuron Activation Analysis
Abhilekha Dalal, R. Rayan, Adrita Barua, Eugene Y. Vasserman, Md Kamruzzaman Sarker, Pascal Hitzler
21 Apr 2024
Accurate estimation of feature importance faithfulness for tree models
Mateusz Gajewski, Adam Karczmarz, Mateusz Rapicki, Piotr Sankowski
04 Apr 2024
Are Classification Robustness and Explanation Robustness Really Strongly Correlated? An Analysis Through Input Loss Landscape
Tiejin Chen, Wenwang Huang, Linsey Pang, Dongsheng Luo, Hua Wei
OOD · 09 Mar 2024
Identifying Drivers of Predictive Aleatoric Uncertainty
Pascal Iversen, Simon Witzke, Katharina Baum, Bernhard Y. Renard
UD · 12 Dec 2023
Interpretability-Aware Vision Transformer
Yao Qiang, Chengyin Li, Prashant Khanduri, D. Zhu
ViT · 14 Sep 2023
Confident Feature Ranking
Bitya Neuhof, Y. Benjamini
FAtt · 28 Jul 2023
A New Perspective on Evaluation Methods for Explainable Artificial Intelligence (XAI)
Timo Speith, Markus Langer
26 Jul 2023
Robust Ranking Explanations
Chao Chen, Chenghua Guo, Guixiang Ma, Ming Zeng, Xi Zhang, Sihong Xie
FAtt, AAML · 08 Jul 2023
Explainable Predictive Maintenance
Sepideh Pashami, Sławomir Nowaczyk, Yuantao Fan, Jakub Jakubowski, Nuno Paiva, ..., Bruno Veloso, M. Sayed-Mouchaweh, L. Rajaoarisoa, Grzegorz J. Nalepa, João Gama
08 Jun 2023
Rectifying Group Irregularities in Explanations for Distribution Shift
Adam Stein, Yinjun Wu, Eric Wong, Mayur Naik
25 May 2023
BELLA: Black box model Explanations by Local Linear Approximations
N. Radulovic, Albert Bifet, Fabian M. Suchanek
FAtt · 18 May 2023
The Generalizability of Explanations
Hanxiao Tan
FAtt · 23 Feb 2023
A novel approach to generate datasets with XAI ground truth to evaluate image models
Miquel Miró-Nicolau, Antoni Jaume-i-Capó, Gabriel Moyà Alcover
11 Feb 2023
Understanding User Preferences in Explainable Artificial Intelligence: A Survey and a Mapping Function Proposal
M. Hashemi, Ali Darejeh, Francisco Cruz
07 Feb 2023
Certified Interpretability Robustness for Class Activation Mapping
Alex Gu, Tsui-Wei Weng, Pin-Yu Chen, Sijia Liu, Luca Daniel
AAML · 26 Jan 2023
The State of the Art in Enhancing Trust in Machine Learning Models with the Use of Visualizations
Angelos Chatzimparmpas, R. Martins, I. Jusufi, K. Kucher, Fabrice Rossi, A. Kerren
FAtt · 22 Dec 2022
This changes to that: Combining causal and non-causal explanations to generate disease progression in capsule endoscopy
Anuja Vats, A. Mohammed, Marius Pedersen, Nirmalie Wiratunga
MedIm · 05 Dec 2022
Explainability in Practice: Estimating Electrification Rates from Mobile Phone Data in Senegal
Laura State, Hadrien Salat, S. Rubrichi, Z. Smoreda
11 Nov 2022
On the Robustness of Explanations of Deep Neural Network Models: A Survey
Amlan Jyoti, Karthik Balaji Ganesh, Manoj Gayala, Nandita Lakshmi Tunuguntla, Sandesh Kamath, V. Balasubramanian
XAI, FAtt, AAML · 09 Nov 2022
Mitigating Covertly Unsafe Text within Natural Language Systems
Alex Mei, Anisha Kabir, Sharon Levy, Melanie Subbiah, Emily Allaway, J. Judge, D. Patton, Bruce Bimber, Kathleen McKeown, William Yang Wang
17 Oct 2022
Machine Learning in Transaction Monitoring: The Prospect of xAI
Julie Gerlings, Ioanna D. Constantiou
14 Oct 2022
What the DAAM: Interpreting Stable Diffusion Using Cross Attention
Raphael Tang, Linqing Liu, Akshat Pandey, Zhiying Jiang, Gefei Yang, K. Kumar, Pontus Stenetorp, Jimmy J. Lin, Ferhan Ture
10 Oct 2022
Greybox XAI: a Neural-Symbolic learning framework to produce interpretable predictions for image classification
Adrien Bennetot, Gianni Franchi, Javier Del Ser, Raja Chatila, Natalia Díaz Rodríguez
AAML · 26 Sep 2022
Explainable AI for clinical and remote health applications: a survey on tabular and time series data
Flavio Di Martino, Franca Delmastro
AI4TS · 14 Sep 2022
Macroeconomic Predictions using Payments Data and Machine Learning
James T. E. Chapman, Ajit Desai
02 Sep 2022
Interpretable (not just posthoc-explainable) medical claims modeling for discharge placement to prevent avoidable all-cause readmissions or death
Joshua C. Chang, Ted L. Chang, Carson C. Chow, R. Mahajan, Sonya Mahajan, Joe Maisog, Shashaank Vattikuti, Hongjing Xia
FAtt, OOD · 28 Aug 2022
SoK: Explainable Machine Learning for Computer Security Applications
A. Nadeem, D. Vos, Clinton Cao, Luca Pajola, Simon Dieck, Robert Baumgartner, S. Verwer
22 Aug 2022
Attribution-based Explanations that Provide Recourse Cannot be Robust
H. Fokkema, R. de Heide, T. van Erven
FAtt · 31 May 2022
ExSum: From Local Explanations to Model Understanding
Yilun Zhou, Marco Tulio Ribeiro, J. Shah
FAtt, LRM · 30 Apr 2022
Can Rationalization Improve Robustness?
Howard Chen, Jacqueline He, Karthik Narasimhan, Danqi Chen
AAML · 25 Apr 2022
Interpretation of Black Box NLP Models: A Survey
Shivani Choudhary, N. Chatterjee, S. K. Saha
FAtt · 31 Mar 2022
Robustness and Usefulness in AI Explanation Methods
Erick Galinkin
FAtt · 07 Mar 2022
The Disagreement Problem in Explainable Machine Learning: A Practitioner's Perspective
Satyapriya Krishna, Tessa Han, Alex Gu, Steven Wu, S. Jabbari, Himabindu Lakkaraju
03 Feb 2022
Why Are You Weird? Infusing Interpretability in Isolation Forest for Anomaly Detection
Nirmal Sobha Kartha, Clément Gautrais, Vincent Vercruyssen
13 Dec 2021
LIMEcraft: Handcrafted superpixel selection and inspection for Visual eXplanations
Weronika Hryniewska, Adrianna Grudzień, P. Biecek
FAtt · 15 Nov 2021
Defense Against Explanation Manipulation
Ruixiang Tang, Ninghao Liu, Fan Yang, Na Zou, Xia Hu
AAML · 08 Nov 2021
A Survey on the Robustness of Feature Importance and Counterfactual Explanations
Saumitra Mishra, Sanghamitra Dutta, Jason Long, Daniele Magazzeni
AAML · 30 Oct 2021
The Irrationality of Neural Rationale Models
Yiming Zheng, Serena Booth, J. Shah, Yilun Zhou
14 Oct 2021
A Field Guide to Scientific XAI: Transparent and Interpretable Deep Learning for Bioinformatics Research
Thomas P. Quinn, Sunil R. Gupta, Svetha Venkatesh, Vuong Le
OOD · 13 Oct 2021