
Towards Robust Interpretability with Self-Explaining Neural Networks
arXiv: 1806.07538

20 June 2018
David Alvarez-Melis
Tommi Jaakkola
Topics: MILM, XAI

Papers citing "Towards Robust Interpretability with Self-Explaining Neural Networks"

Showing 50 of 507 citing papers.
Explainable Intrusion Detection Systems (X-IDS): A Survey of Current Methods, Challenges, and Opportunities
Subash Neupane, Jesse Ables, William Anderson, Sudip Mittal, Shahram Rahimi, I. Banicescu, Maria Seale
Topics: AAML
13 Jul 2022
Towards a More Rigorous Science of Blindspot Discovery in Image Classification Models
Gregory Plumb, Nari Johnson, Ángel Alexander Cabrera, Ameet Talwalkar
08 Jul 2022
Interpretable by Design: Learning Predictors by Composing Interpretable Queries
Aditya Chattopadhyay, Stewart Slocum, B. Haeffele, René Vidal, D. Geman
03 Jul 2022
Connecting Algorithmic Research and Usage Contexts: A Perspective of Contextualized Evaluation for Explainable AI
Q. V. Liao, Yunfeng Zhang, Ronny Luss, Finale Doshi-Velez, Amit Dhurandhar
22 Jun 2022
Interpretable machine learning optimization (InterOpt) for operational parameters: a case study of highly-efficient shale gas development
Yuntian Chen, Dong-juan Zhang, Qun Zhao, D. Liu
20 Jun 2022
C-SENN: Contrastive Self-Explaining Neural Network
Yoshihide Sawada, Keigo Nakamura
Topics: SSL
20 Jun 2022
Machine Learning in Sports: A Case Study on Using Explainable Models for Predicting Outcomes of Volleyball Matches
Abhinav Lalwani, Aman Saraiya, Apoorv Singh, Aditya Jain, T. Dash
18 Jun 2022
On the Bias-Variance Characteristics of LIME and SHAP in High Sparsity Movie Recommendation Explanation Tasks
Claudia V. Roberts, Ehtsham Elahi, Ashok Chandrashekar
Topics: FAtt
09 Jun 2022
Do We Need Another Explainable AI Method? Toward Unifying Post-hoc XAI Evaluation Methods into an Interactive and Multi-dimensional Benchmark
Mohamed Karim Belaid, Eyke Hüllermeier, Maximilian Rabus, Ralf Krestel
Topics: ELM
08 Jun 2022
Saliency Cards: A Framework to Characterize and Compare Saliency Methods
Angie Boggust, Harini Suresh, Hendrik Strobelt, John Guttag, Arvindmani Satyanarayan
Topics: FAtt, XAI
07 Jun 2022
Use-Case-Grounded Simulations for Explanation Evaluation
Valerie Chen, Nari Johnson, Nicholay Topin, Gregory Plumb, Ameet Talwalkar
Topics: FAtt, ELM
05 Jun 2022
HEX: Human-in-the-loop Explainability via Deep Reinforcement Learning
Michael T. Lash
02 Jun 2022
Interpretability Guarantees with Merlin-Arthur Classifiers
S. Wäldchen, Kartikey Sharma, Berkant Turan, Max Zimmer, Sebastian Pokutta
Topics: FAtt
01 Jun 2022
Composition of Relational Features with an Application to Explaining Black-Box Predictors
A. Srinivasan, A. Baskar, T. Dash, Devanshu Shah
Topics: CoGe
01 Jun 2022
Concept-level Debugging of Part-Prototype Networks
A. Bontempelli, Stefano Teso, Katya Tentori, Fausto Giunchiglia, Andrea Passerini
31 May 2022
GlanceNets: Interpretabile, Leak-proof Concept-based Models
Emanuele Marconato, Andrea Passerini, Stefano Teso
31 May 2022
Investigating the Benefits of Free-Form Rationales
Jiao Sun, Swabha Swayamdipta, Jonathan May, Xuezhe Ma
25 May 2022
A Fine-grained Interpretability Evaluation Benchmark for Neural NLP
Lijie Wang, Yaozong Shen, Shu-ping Peng, Shuai Zhang, Xinyan Xiao, Hao Liu, Hongxuan Tang, Ying Chen, Hua-Hong Wu, Haifeng Wang
Topics: ELM
23 May 2022
Exploring the Trade-off between Plausibility, Change Intensity and Adversarial Power in Counterfactual Explanations using Multi-objective Optimization
Javier Del Ser, Alejandro Barredo Arrieta, Natalia Díaz Rodríguez, Francisco Herrera, Andreas Holzinger
Topics: AAML
20 May 2022
Visual Concepts Tokenization
Tao Yang, Yuwang Wang, Yan Lu, Nanning Zheng
Topics: OCL, ViT
20 May 2022
The Solvability of Interpretability Evaluation Metrics
Yilun Zhou, J. Shah
18 May 2022
Fairness via Explanation Quality: Evaluating Disparities in the Quality of Post hoc Explanations
Jessica Dai, Sohini Upadhyay, Ulrich Aïvodji, Stephen H. Bach, Himabindu Lakkaraju
15 May 2022
ConceptDistil: Model-Agnostic Distillation of Concept Explanations
Joao Bento Sousa, Ricardo Moreira, Vladimir Balayan, Pedro Saleiro, P. Bizarro
Topics: FAtt
07 May 2022
One-way Explainability Isn't The Message
A. Srinivasan, Michael Bain, Enrico W. Coiera
05 May 2022
ExSum: From Local Explanations to Model Understanding
Yilun Zhou, Marco Tulio Ribeiro, J. Shah
Topics: FAtt, LRM
30 Apr 2022
Counterfactual Explanations for Natural Language Interfaces
George Tolkachev, Stephen Mell, Steve Zdancewic, Osbert Bastani
Topics: LRM, AAML
27 Apr 2022
Landing AI on Networks: An equipment vendor viewpoint on Autonomous Driving Networks
Dario Rossi, Liang Zhang
26 Apr 2022
Proto2Proto: Can you recognize the car, the way I do?
Monish Keswani, Sriranjani Ramakrishnan, Nishant Reddy, V. Balasubramanian
25 Apr 2022
A Set Membership Approach to Discovering Feature Relevance and Explaining Neural Classifier Decisions
S. P. Adam, A. Likas
05 Apr 2022
Provable concept learning for interpretable predictions using variational autoencoders
Armeen Taeb, Nicolò Ruggeri, Carina Schnuck, Fanny Yang
01 Apr 2022
Diffusion Models for Counterfactual Explanations
Guillaume Jeanneret, Loïc Simon, F. Jurie
Topics: DiffM
29 Mar 2022
A Unified Study of Machine Learning Explanation Evaluation Metrics
Yipei Wang, Xiaoqian Wang
Topics: XAI
27 Mar 2022
Unsupervised Keyphrase Extraction via Interpretable Neural Networks
Rishabh Joshi, Vidhisha Balachandran, Emily Saldanha, M. Glenski, Svitlana Volkova, Yulia Tsvetkov
Topics: SSL
15 Mar 2022
Don't Get Me Wrong: How to Apply Deep Visual Interpretations to Time Series
Christoffer Loeffler, Wei-Cheng Lai, Bjoern M. Eskofier, Dario Zanca, Lukas M. Schmidt, Christopher Mutschler
Topics: FAtt, AI4TS
14 Mar 2022
Understanding Person Identification through Gait
Simon Hanisch, Evelyn Muschter, Admantini Hatzipanayioti, Shu-Chen Li, Thorsten Strufe
Topics: CVBM
08 Mar 2022
Concept-based Explanations for Out-Of-Distribution Detectors
Jihye Choi, Jayaram Raghuram, Ryan Feng, Jiefeng Chen, S. Jha, Atul Prakash
Topics: OODD
04 Mar 2022
Human-Centered Concept Explanations for Neural Networks
Chih-Kuan Yeh, Been Kim, Pradeep Ravikumar
Topics: FAtt
25 Feb 2022
Listen to Interpret: Post-hoc Interpretability for Audio Networks with NMF
Jayneel Parekh, Sanjeel Parekh, Pavlo Mozharovskyi, Florence d'Alché-Buc, G. Richard
23 Feb 2022
Hierarchical Interpretation of Neural Text Classification
Hanqi Yan, Lin Gui, Yulan He
20 Feb 2022
Guidelines and Evaluation of Clinical Explainable AI in Medical Image Analysis
Weina Jin, Xiaoxiao Li, M. Fatehi, Ghassan Hamarneh
Topics: ELM, XAI
16 Feb 2022
Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations and Beyond
Anna Hedström, Leander Weber, Dilyara Bareeva, Daniel G. Krakowczyk, Franz Motzkus, Wojciech Samek, Sebastian Lapuschkin, Marina M.-C. Höhne
Topics: XAI, ELM
14 Feb 2022
A Lightweight, Efficient and Explainable-by-Design Convolutional Neural Network for Internet Traffic Classification
Kevin Fauvel, Fuxing Chen, Dario Rossi
11 Feb 2022
Concept Bottleneck Model with Additional Unsupervised Concepts
Yoshihide Sawada, Keigo Nakamura
Topics: SSL
03 Feb 2022
From Anecdotal Evidence to Quantitative Evaluation Methods: A Systematic Review on Evaluating Explainable AI
Meike Nauta, Jan Trienes, Shreyasi Pathak, Elisa Nguyen, Michelle Peters, Yasmin Schmitt, Jorg Schlotterer, M. V. Keulen, C. Seifert
Topics: ELM, XAI
20 Jan 2022
Towards Automated Error Analysis: Learning to Characterize Errors
Tong Gao, Shivang Singh, Raymond J. Mooney
13 Jan 2022
Explainable Artificial Intelligence Methods in Combating Pandemics: A Systematic Review
F. Giuste, Wenqi Shi, Yuanda Zhu, Tarun Naren, Monica Isgut, Ying Sha, L. Tong, Mitali S. Gupte, May D. Wang
23 Dec 2021
More Than Words: Towards Better Quality Interpretations of Text Classifiers
Muhammad Bilal Zafar, Philipp Schmidt, Michele Donini, Cédric Archambeau, F. Biessmann, Sanjiv Ranjan Das, K. Kenthapadi
Topics: FAtt
23 Dec 2021
RELAX: Representation Learning Explainability
Kristoffer Wickstrøm, Daniel J. Trosten, Sigurd Løkse, Ahcène Boubekki, Karl Øyvind Mikalsen, Michael C. Kampffmeyer, Robert Jenssen
Topics: FAtt
19 Dec 2021
Interpretable and Interactive Deep Multiple Instance Learning for Dental Caries Classification in Bitewing X-rays
Benjamin Bergner, Csaba Rohrer, Aiham Taleb, Martha Duchrau, Guilherme De Leon, J. A. Rodrigues, F. Schwendicke, J. Krois, C. Lippert
17 Dec 2021
Utilizing XAI technique to improve autoencoder based model for computer network anomaly detection with shapley additive explanation (SHAP)
Khushnaseeb Roshan, Aasim Zafar
Topics: AAML
14 Dec 2021