Towards Robust Interpretability with Self-Explaining Neural Networks
David Alvarez-Melis, Tommi Jaakkola
arXiv:1806.07538. 20 June 2018. MILM, XAI.

Papers citing "Towards Robust Interpretability with Self-Explaining Neural Networks" (50 of 507 papers shown)

Enhanced Prototypical Part Network (EPPNet) For Explainable Image Classification Via Prototypes
Bhushan Atote, Victor Sanchez. 08 Aug 2024.

Explain via Any Concept: Concept Bottleneck Model with Open Vocabulary Concepts
Andong Tan, Fengtao Zhou, Hao Chen. VLM. 05 Aug 2024.

META-ANOVA: Screening interactions for interpretable machine learning
Daniel A. Serino, Marc L. Klasky, Chanmoo Park, Dongha Kim, Yongdai Kim. 02 Aug 2024.

Revisiting the robustness of post-hoc interpretability methods
Jiawen Wei, Hugues Turbé, G. Mengaldo. AAML. 29 Jul 2024.

CoLiDR: Concept Learning using Aggregated Disentangled Representations
Sanchit Sinha, Guangzhi Xiong, Aidong Zhang. 27 Jul 2024.

Interpretable Concept-Based Memory Reasoning
David Debot, Pietro Barbiero, Francesco Giannini, Gabriele Ciravegna, Michelangelo Diligenti, Giuseppe Marra. LRM. 22 Jul 2024.

MAVEN-Fact: A Large-scale Event Factuality Detection Dataset
Chunyang Li, Hao Peng, Xiaozhi Wang, Y. Qi, Lei Hou, Bin Xu, Juanzi Li. HILM. 22 Jul 2024.

CoxSE: Exploring the Potential of Self-Explaining Neural Networks with Cox Proportional Hazards Model for Survival Analysis
Abdallah Alabdallah, Omar Hamed, Mattias Ohlsson, Thorsteinn Rögnvaldsson, Sepideh Pashami. 18 Jul 2024.

Benchmarking the Attribution Quality of Vision Models
Robin Hesse, Simone Schaub-Meyer, Stefan Roth. FAtt. 16 Jul 2024.

Explainability of Sub-Field Level Crop Yield Prediction using Remote Sensing
Hiba Najjar, Miro Miranda, Marlon Nuske, R. Roscher, A. Dengel. 11 Jul 2024.

Explainable Image Recognition via Enhanced Slot-attention Based Classifier
Bowen Wang, Liangzhi Li, Jiahao Zhang, Yuta Nakashima, Hajime Nagahara. OCL. 08 Jul 2024.

Restyling Unsupervised Concept Based Interpretable Networks with Generative Models
Jayneel Parekh, Quentin Bouniot, Pavlo Mozharovskyi, A. Newson, Florence d'Alché-Buc. SSL. 01 Jul 2024.

Axiomatization of Gradient Smoothing in Neural Networks
Linjiang Zhou, Xiaochuan Shi, Chao Ma, Zepeng Wang. FAtt. 29 Jun 2024.

Self-supervised Interpretable Concept-based Models for Text Classification
Francesco De Santis, Philippe Bich, Gabriele Ciravegna, Pietro Barbiero, Danilo Giordano, Tania Cerquitelli. 20 Jun 2024.

Probabilistic Conceptual Explainers: Trustworthy Conceptual Explanations for Vision Foundation Models
Hengyi Wang, Shiwei Tan, Hao Wang. BDL. 18 Jun 2024.

Discovering influential text using convolutional neural networks
Megan Ayers, Luke Sanford, Margaret E. Roberts, Eddie Yang. 14 Jun 2024.

Neural Concept Binder
Wolfgang Stammer, Antonia Wüst, David Steinmann, Kristian Kersting. OCL. 14 Jun 2024.

ConceptHash: Interpretable Fine-Grained Hashing via Concept Discovery
Kam Woh Ng, Xiatian Zhu, Yi-Zhe Song, Tao Xiang. 12 Jun 2024.

Applications of Explainable artificial intelligence in Earth system science
Feini Huang, Shijie Jiang, Lu Li, Yongkun Zhang, Ye Zhang, Ruqing Zhang, Qingliang Li, Danxi Li, Wei Shangguan, Yongjiu Dai. 12 Jun 2024.

A Concept-Based Explainability Framework for Large Multimodal Models
Jayneel Parekh, Pegah Khayatan, Mustafa Shukor, A. Newson, Matthieu Cord. 12 Jun 2024.

How Interpretable Are Interpretable Graph Neural Networks?
Yongqiang Chen, Yatao Bian, Bo Han, James Cheng. 12 Jun 2024.

Attri-Net: A Globally and Locally Inherently Interpretable Model for Multi-Label Classification Using Class-Specific Counterfactuals
Susu Sun, S. Woerner, Andreas Maier, Lisa M. Koch, Christian F. Baumgartner. FAtt. 08 Jun 2024.

Helpful or Harmful Data? Fine-tuning-free Shapley Attribution for Explaining Language Model Predictions
Jingtan Wang, Xiaoqiang Lin, Rui Qiao, Chuan-Sheng Foo, Bryan Kian Hsiang Low. TDI. 07 Jun 2024.

Post-hoc Part-prototype Networks
Andong Tan, Fengtao Zhou, Hao Chen. 05 Jun 2024.

Expected Grad-CAM: Towards gradient faithfulness
Vincenzo Buono, Peyman Sheikholharam Mashhadi, M. Rahat, Prayag Tiwari, Stefan Byttner. FAtt. 03 Jun 2024.

A Sim2Real Approach for Identifying Task-Relevant Properties in Interpretable Machine Learning
Eura Nofshin, Esther Brown, Brian Lim, Weiwei Pan, Finale Doshi-Velez. 31 May 2024.

Weak Robust Compatibility Between Learning Algorithms and Counterfactual Explanation Generation Algorithms
Ao Xu, Tieru Wu. 31 May 2024.

On Generating Monolithic and Model Reconciling Explanations in Probabilistic Scenarios
Stylianos Loukas Vasileiou, William Yeoh, Alessandro Previti, Tran Cao Son. 29 May 2024.

LucidPPN: Unambiguous Prototypical Parts Network for User-centric Interpretable Computer Vision
Mateusz Pach, Dawid Rymarczyk, K. Lewandowska, Jacek Tabor, Bartosz Zieliński. 23 May 2024.

Towards a Unified Framework for Evaluating Explanations
Juan D. Pinto, Luc Paquette. 22 May 2024.

Why do explanations fail? A typology and discussion on failures in XAI
Clara Bove, Thibault Laugel, Marie-Jeanne Lesot, C. Tijus, Marcin Detyniecki. 22 May 2024.

WISER: Weak supervISion and supErvised Representation learning to improve drug response prediction in cancer
Kumar Shubham, A. Jayagopal, Syed Mohammed Danish, AP Prathosh, Vaibhav Rajan. OOD. 07 May 2024.

Explainable Interface for Human-Autonomy Teaming: A Survey
Xiangqi Kong, Yang Xing, Antonios Tsourdos, Ziyue Wang, Weisi Guo, Adolfo Perrusquía, Andreas Wikander. 04 May 2024.

A Fresh Look at Sanity Checks for Saliency Maps
Anna Hedström, Leander Weber, Sebastian Lapuschkin, Marina M.-C. Höhne. FAtt, LRM. 03 May 2024.

Stability of Explainable Recommendation
Sairamvinay Vijayaraghavan, Prasant Mohapatra. AAML. 03 May 2024.

Improving Intervention Efficacy via Concept Realignment in Concept Bottleneck Models
Nishad Singhi, Jae Myung Kim, Karsten Roth, Zeynep Akata. 02 May 2024.

A Self-explaining Neural Architecture for Generalizable Concept Learning
Sanchit Sinha, Guangzhi Xiong, Aidong Zhang. 01 May 2024.

Rad4XCNN: a new agnostic method for post-hoc global explanation of CNN-derived features by means of radiomics
Francesco Prinzi, C. Militello, Calogero Zarcaro, T. Bartolotta, Salvatore Gaglio, Salvatore Vitabile. 26 Apr 2024.

T-Explainer: A Model-Agnostic Explainability Framework Based on Gradients
Evandro S. Ortigossa, Fábio F. Dias, Brian Barr, Claudio T. Silva, L. G. Nonato. FAtt. 25 Apr 2024.

How should AI decisions be explained? Requirements for Explanations from the Perspective of European Law
Benjamin Frész, Elena Dubovitskaya, Danilo Brajovic, Marco F. Huber, Christian Horz. 19 Apr 2024.

Toward Understanding the Disagreement Problem in Neural Network Feature Attribution
Niklas Koenen, Marvin N. Wright. FAtt. 17 Apr 2024.

Interpretability in Symbolic Regression: a benchmark of Explanatory Methods using the Feynman data set
Guilherme Seidyo Imai Aldeia, Fabrício Olivetti de França. 08 Apr 2024.

An Interpretable Power System Transient Stability Assessment Method with Expert Guiding Neural-Regression-Tree
Hanxuan Wang, Na Lu, Zixuan Wang, Jiacheng Liu, Jun Liu. 03 Apr 2024.

Source-Aware Training Enables Knowledge Attribution in Language Models
Muhammad Khalifa, David Wadden, Emma Strubell, Honglak Lee, Lu Wang, Iz Beltagy, Hao Peng. HILM. 01 Apr 2024.

A Survey of Privacy-Preserving Model Explanations: Privacy Risks, Attacks, and Countermeasures
Thanh Tam Nguyen, T. T. Huynh, Zhao Ren, Thanh Toan Nguyen, Phi Le Nguyen, Hongzhi Yin, Quoc Viet Hung Nguyen. 31 Mar 2024.

Neural Clustering based Visual Representation Learning
Guikun Chen, Xia Li, Yi Yang, Wenguan Wang. SSL. 26 Mar 2024.

A survey on Concept-based Approaches For Model Improvement
Avani Gupta, P. J. Narayanan. LRM. 21 Mar 2024.

Learning Decomposable and Debiased Representations via Attribute-Centric Information Bottlenecks
Jinyung Hong, Eunyeong Jeon, Changhoon Kim, Keun Hee Park, Utkarsh Nath, Yezhou Yang, P. Turaga, Theodore P. Pavlic. CML. 21 Mar 2024.

Towards White Box Deep Learning
Maciej Satkiewicz. AAML. 14 Mar 2024.

Upper Bound of Bayesian Generalization Error in Partial Concept Bottleneck Model (CBM): Partial CBM outperforms naive CBM
Naoki Hayashi, Yoshihide Sawada. 14 Mar 2024.