
On the Robustness of Interpretability Methods
David Alvarez-Melis, Tommi Jaakkola
arXiv:1806.08049, 21 June 2018

Papers citing "On the Robustness of Interpretability Methods" (24 of 74 shown):
1. RKHS-SHAP: Shapley Values for Kernel Methods (18 Oct 2021)
   Siu Lun Chau, Robert Hu, Javier I. González, Dino Sejdinovic [FAtt]

2. The Irrationality of Neural Rationale Models (14 Oct 2021)
   Yiming Zheng, Serena Booth, J. Shah, Yilun Zhou

3. A Field Guide to Scientific XAI: Transparent and Interpretable Deep Learning for Bioinformatics Research (13 Oct 2021)
   Thomas P. Quinn, Sunil R. Gupta, Svetha Venkatesh, Vuong Le [OOD]

4. XPROAX-Local explanations for text classification with progressive neighborhood approximation (30 Sep 2021)
   Yi Cai, Arthur Zimek, Eirini Ntoutsi

5. Diagnostics-Guided Explanation Generation (08 Sep 2021)
   Pepa Atanasova, J. Simonsen, Christina Lioma, Isabelle Augenstein [LRM, FAtt]

6. Quantifying Explainability in NLP and Analyzing Algorithms for Performance-Explainability Tradeoff (12 Jul 2021)
   Michael J. Naylor, C. French, Samantha R. Terker, Uday Kamath

7. On Locality of Local Explanation Models (24 Jun 2021)
   Sahra Ghalebikesabi, Lucile Ter-Minassian, Karla Diaz-Ordaz, Chris Holmes [FedML, FAtt]

8. 3DB: A Framework for Debugging Computer Vision Models (07 Jun 2021)
   Guillaume Leclerc, Hadi Salman, Andrew Ilyas, Sai H. Vemprala, Logan Engstrom, ..., Pengchuan Zhang, Shibani Santurkar, Greg Yang, Ashish Kapoor, A. Madry

9. On the Sensitivity and Stability of Model Interpretations in NLP (18 Apr 2021)
   Fan Yin, Zhouxing Shi, Cho-Jui Hsieh, Kai-Wei Chang [FAtt]

10. Interpretable Machine Learning: Moving From Mythos to Diagnostics (10 Mar 2021)
    Valerie Chen, Jeffrey Li, Joon Sik Kim, Gregory Plumb, Ameet Talwalkar

11. Counterfactuals and Causability in Explainable Artificial Intelligence: Theory, Algorithms, and Applications (07 Mar 2021)
    Yu-Liang Chou, Catarina Moreira, P. Bruza, Chun Ouyang, Joaquim A. Jorge [CML]

12. How can I choose an explainer? An Application-grounded Evaluation of Post-hoc Explanations (21 Jan 2021)
    Sérgio Jesus, Catarina Belém, Vladimir Balayan, João Bento, Pedro Saleiro, P. Bizarro, João Gama

13. Explanation from Specification (13 Dec 2020)
    Harish Naik, Gyorgy Turán [XAI]

14. Neural Prototype Trees for Interpretable Fine-grained Image Recognition (03 Dec 2020)
    Meike Nauta, Ron van Bree, C. Seifert

15. Optimism in the Face of Adversity: Understanding and Improving Deep Learning through Adversarial Robustness (19 Oct 2020)
    Guillermo Ortiz-Jiménez, Apostolos Modas, Seyed-Mohsen Moosavi-Dezfooli, P. Frossard [AAML]

16. Captum: A unified and generic model interpretability library for PyTorch (16 Sep 2020)
    Narine Kokhlikyan, Vivek Miglani, Miguel Martin, Edward Wang, B. Alsallakh, ..., Alexander Melnikov, Natalia Kliushkina, Carlos Araya, Siqi Yan, Orion Reblitz-Richardson [FAtt]

17. Explainable Artificial Intelligence for Process Mining: A General Overview and Application of a Novel Local Explanation Approach for Predictive Process Monitoring (04 Sep 2020)
    Nijat Mehdiyev, Peter Fettke [AI4TS]

18. Explainable Predictive Process Monitoring (04 Aug 2020)
    Musabir Musabayli, F. Maggi, Williams Rizzi, Josep Carmona, Chiara Di Francescomarino

19. OptiLIME: Optimized LIME Explanations for Diagnostic Computer Algorithms (10 Jun 2020)
    Giorgio Visani, Enrico Bagli, F. Chesani [FAtt]

20. Explainable Deep Learning: A Field Guide for the Uninitiated (30 Apr 2020)
    Gabrielle Ras, Ning Xie, Marcel van Gerven, Derek Doran [AAML, XAI]

21. MonoNet: Towards Interpretable Models by Learning Monotonic Features (30 Sep 2019)
    An-phi Nguyen, María Rodríguez Martínez [FAtt]

22. On The Stability of Interpretable Models (22 Oct 2018)
    Riccardo Guidotti, Salvatore Ruggieri [FAtt]

23. Towards Robust Interpretability with Self-Explaining Neural Networks (20 Jun 2018)
    David Alvarez-Melis, Tommi Jaakkola [MILM, XAI]

24. A causal framework for explaining the predictions of black-box sequence-to-sequence models (06 Jul 2017)
    David Alvarez-Melis, Tommi Jaakkola [CML]