ResearchTrend.AI
Towards Robust Interpretability with Self-Explaining Neural Networks
David Alvarez-Melis, Tommi Jaakkola
20 June 2018 · MILM, XAI

Papers citing "Towards Robust Interpretability with Self-Explaining Neural Networks"

50 / 507 papers shown
Valid P-Value for Deep Learning-Driven Salient Region
Daiki Miwa, Vo Nguyen Le Duy, I. Takeuchi
06 Jan 2023 · FAtt, AAML

DANLIP: Deep Autoregressive Networks for Locally Interpretable Probabilistic Forecasting
Ozan Ozyegen, Juyoung Wang, Mucahit Cevik
05 Jan 2023 · BDL, AI4TS

VCNet: A self-explaining model for realistic counterfactual generation
Victor Guyomard, Françoise Fessant, Thomas Guyet, Tassadit Bouadi, Alexandre Termier
21 Dec 2022 · BDL, OOD, CML

Evaluation and Improvement of Interpretability for Self-Explainable Part-Prototype Networks
Qihan Huang, Mengqi Xue, Wenqi Huang, Haofei Zhang, Jie Song, Yongcheng Jing, Mingli Song
12 Dec 2022 · AAML

Causality-Aware Local Interpretable Model-Agnostic Explanations
Martina Cinquini, Riccardo Guidotti
10 Dec 2022 · CML

Going Beyond XAI: A Systematic Survey for Explanation-Guided Learning
Yuyang Gao, Siyi Gu, Junji Jiang, S. Hong, Dazhou Yu, Liang Zhao
07 Dec 2022

Truthful Meta-Explanations for Local Interpretability of Machine Learning Models
Ioannis Mollas, Nick Bassiliades, Grigorios Tsoumakas
07 Dec 2022

Learning to Select Prototypical Parts for Interpretable Sequential Data Modeling
Yifei Zhang, Nengneng Gao, Cunqing Ma
07 Dec 2022

Explainability as statistical inference
Hugo Senetaire, Damien Garreau, J. Frellsen, Pierre-Alexandre Mattei
06 Dec 2022 · FAtt

Intermediate Entity-based Sparse Interpretable Representation Learning
Diego Garcia-Olano, Yasumasa Onoe, Joydeep Ghosh, Byron C. Wallace
03 Dec 2022

Evaluation of Explanation Methods of AI -- CNNs in Image Classification Tasks with Reference-based and No-reference Metrics
A. Zhukov, J. Benois-Pineau, R. Giot
02 Dec 2022

Understanding and Enhancing Robustness of Concept-based Models
Sanchit Sinha, Mengdi Huai, Jianhui Sun, Aidong Zhang
29 Nov 2022 · AAML

Shortcomings of Top-Down Randomization-Based Sanity Checks for Evaluations of Deep Neural Network Explanations
Alexander Binder, Leander Weber, Sebastian Lapuschkin, G. Montavon, Klaus-Robert Müller, Wojciech Samek
22 Nov 2022 · FAtt, AAML

Explainability Via Causal Self-Talk
Nicholas A. Roy, Junkyung Kim, Neil C. Rabinowitz
17 Nov 2022 · CML

Improving Interpretability via Regularization of Neural Activation Sensitivity
Ofir Moshe, Gil Fidel, Ron Bitton, A. Shabtai
16 Nov 2022 · AAML, AI4CE

Explaining Cross-Domain Recognition with Interpretable Deep Classifier
Yiheng Zhang, Ting Yao, Zhaofan Qiu, Tao Mei
15 Nov 2022 · OOD

What Makes a Good Explanation?: A Harmonized View of Properties of Explanations
Zixi Chen, Varshini Subhash, Marton Havasi, Weiwei Pan, Finale Doshi-Velez
10 Nov 2022 · XAI, FAtt

On the Robustness of Explanations of Deep Neural Network Models: A Survey
Amlan Jyoti, Karthik Balaji Ganesh, Manoj Gayala, Nandita Lakshmi Tunuguntla, Sandesh Kamath, V. Balasubramanian
09 Nov 2022 · XAI, FAtt, AAML

SoK: Modeling Explainability in Security Analytics for Interpretability, Trustworthiness, and Usability
Dipkamal Bhusal, Rosalyn Shin, Ajay Ashok Shewale, M. K. Veerabhadran, Michael Clifford, Sara Rampazzi, Nidhi Rastogi
31 Oct 2022 · FAtt, AAML

Does Self-Rationalization Improve Robustness to Spurious Correlations?
Alexis Ross, Matthew E. Peters, Ana Marasović
24 Oct 2022 · LRM

Towards Human-centered Explainable AI: A Survey of User Studies for Model Explanations
Yao Rong, Tobias Leemann, Thai-trang Nguyen, Lisa Fiedler, Peizhu Qian, Vaibhav Unhelkar, Tina Seidel, Gjergji Kasneci, Enkelejda Kasneci
20 Oct 2022 · ELM

Providing Error Detection for Deep Learning Image Classifiers Using Self-Explainability
M. M. Karimi, Azin Heidarshenas, W. Edmonson
15 Oct 2022

ProtoVAE: A Trustworthy Self-Explainable Prototypical Variational Model
Srishti Gautam, Ahcène Boubekki, Stine Hansen, Suaiba Amina Salahuddin, Robert Jenssen, Marina M.-C. Höhne, Michael C. Kampffmeyer
15 Oct 2022

Self-explaining deep models with logic rule reasoning
Seungeon Lee, Xiting Wang, Sungwon Han, Xiaoyuan Yi, Xing Xie, M. Cha
13 Oct 2022 · NAI, ReLM, LRM

Interpreting Neural Policies with Disentangled Tree Representations
Tsun-Hsuan Wang, Wei Xiao, Tim Seyde, Ramin Hasani, Daniela Rus
13 Oct 2022 · DRL

REV: Information-Theoretic Evaluation of Free-Text Rationales
Hanjie Chen, Faeze Brahman, Xiang Ren, Yangfeng Ji, Yejin Choi, Swabha Swayamdipta
10 Oct 2022

Self-explaining Hierarchical Model for Intraoperative Time Series
Dingwen Li, Bing Xue, C. King, Bradley A. Fritz, M. Avidan, Joanna Abraham, Chenyang Lu
10 Oct 2022 · AI4CE

Dynamic Latent Separation for Deep Learning
Yi-Lin Tuan, Zih-Yun Chiu, William Yang Wang
07 Oct 2022

Quantitative Metrics for Evaluating Explanations of Video DeepFake Detectors
Federico Baldassarre, Quentin Debard, Gonzalo Fiz Pontiveros, Tri Kurniawan Wijaya
07 Oct 2022

ProGReST: Prototypical Graph Regression Soft Trees for Molecular Property Prediction
Dawid Rymarczyk, D. Dobrowolski, Tomasz Danel
07 Oct 2022

Towards Prototype-Based Self-Explainable Graph Neural Network
Enyan Dai, Suhang Wang
05 Oct 2022

Greybox XAI: a Neural-Symbolic learning framework to produce interpretable predictions for image classification
Adrien Bennetot, Gianni Franchi, Javier Del Ser, Raja Chatila, Natalia Díaz Rodríguez
26 Sep 2022 · AAML

Towards Faithful Model Explanation in NLP: A Survey
Qing Lyu, Marianna Apidianaki, Chris Callison-Burch
22 Sep 2022 · XAI

Concept Activation Regions: A Generalized Framework For Concept-Based Explanations
Jonathan Crabbé, M. Schaar
22 Sep 2022

Concept Embedding Models: Beyond the Accuracy-Explainability Trade-Off
M. Zarlenga, Pietro Barbiero, Gabriele Ciravegna, G. Marra, Francesco Giannini, ..., F. Precioso, S. Melacci, Adrian Weller, Pietro Liò, M. Jamnik
19 Sep 2022

Visual Recognition with Deep Nearest Centroids
Wenguan Wang, Cheng Han, Tianfei Zhou, Dongfang Liu
15 Sep 2022

Explainable AI for clinical and remote health applications: a survey on tabular and time series data
Flavio Di Martino, Franca Delmastro
14 Sep 2022 · AI4TS

Generating detailed saliency maps using model-agnostic methods
Maciej Sakowicz
04 Sep 2022 · FAtt

Model Transparency and Interpretability: Survey and Application to the Insurance Industry
Dimitri Delcaillau, Antoine Ly, Alizé Papp, Franck Vermet
01 Sep 2022 · AI4CE

Formalising the Robustness of Counterfactual Explanations for Neural Networks
Junqi Jiang, Francesco Leofante, Antonio Rago, Francesca Toni
31 Aug 2022 · AAML

The Alignment Problem from a Deep Learning Perspective
Richard Ngo, Lawrence Chan, Sören Mindermann
30 Aug 2022

Explainable AI for tailored electricity consumption feedback -- an experimental evaluation of visualizations
Jacqueline Wastensteiner, T. Weiß, Felix Haag, K. Hopf
24 Aug 2022

Safety Assessment for Autonomous Systems' Perception Capabilities
J. Molloy, John McDermid
17 Aug 2022

An Empirical Comparison of Explainable Artificial Intelligence Methods for Clinical Data: A Case Study on Traumatic Brain Injury
Amin Nayebi, Sindhu Tipirneni, Brandon Foreman, Chandan K. Reddy, V. Subbian
13 Aug 2022

Explaining Classifiers Trained on Raw Hierarchical Multiple-Instance Data
Tomáš Pevný, Viliam Lisý, B. Bošanský, P. Somol, Michal Pěchouček
04 Aug 2022

Leveraging Explanations in Interactive Machine Learning: An Overview
Stefano Teso, Öznur Alkan, Wolfgang Stammer, Elizabeth M. Daly
29 Jul 2022 · XAI, FAtt, LRM

Encoding Concepts in Graph Neural Networks
Lucie Charlotte Magister, Pietro Barbiero, Dmitry Kazhdan, F. Siciliano, Gabriele Ciravegna, Fabrizio Silvestri, M. Jamnik, Pietro Liò
27 Jul 2022

Toward Transparent AI: A Survey on Interpreting the Inner Structures of Deep Neural Networks
Tilman Räuker, A. Ho, Stephen Casper, Dylan Hadfield-Menell
27 Jul 2022 · AAML, AI4CE

Static and Dynamic Concepts for Self-supervised Video Representation Learning
Rui Qian, Shuangrui Ding, Xian Liu, Dahua Lin
26 Jul 2022 · SSL

Stream-based active learning with linear models
Davide Cacciarelli, M. Kulahci, J. Tyssedal
20 Jul 2022