Learning to Explain: An Information-Theoretic Perspective on Model Interpretation
Jianbo Chen, Le Song, Martin J. Wainwright, Michael I. Jordan
arXiv 1802.07814 · MLT, FAtt · 21 February 2018
Papers citing "Learning to Explain: An Information-Theoretic Perspective on Model Interpretation"

50 / 302 papers shown

What went wrong and when? Instance-wise Feature Importance for Time-series Models
S. Tonekaboni, Shalmali Joshi, Kieran Campbell, David Duvenaud, Anna Goldenberg
FAtt, OOD, AI4TS · 53 / 14 / 0 · 05 Mar 2020

An Information-Theoretic Approach to Personalized Explainable Machine Learning
A. Jung, P. H. Nardelli
6 / 20 / 0 · 01 Mar 2020

Importance-Driven Deep Learning System Testing
Simos Gerasimou, Hasan Ferit Eniser, A. Sen, Alper Çakan
AAML, VLM · 32 / 98 / 0 · 09 Feb 2020

DANCE: Enhancing saliency maps using decoys
Y. Lu, Wenbo Guo, Masashi Sugiyama, William Stafford Noble
AAML · 40 / 14 / 0 · 03 Feb 2020

GraphLIME: Local Interpretable Model Explanations for Graph Neural Networks
Q. Huang, M. Yamada, Yuan Tian, Dinesh Singh, Dawei Yin, Yi-Ju Chang
FAtt · 37 / 345 / 0 · 17 Jan 2020

Clusters in Explanation Space: Inferring disease subtypes from model explanations
Marc-Andre Schulz, M. Chapman-Rounds, Manisha Verma, D. Bzdok, K. Georgatzis
12 / 2 / 0 · 18 Dec 2019

Automated Dependence Plots
David I. Inouye, Liu Leqi, Joon Sik Kim, Bryon Aragam, Pradeep Ravikumar
12 / 1 / 0 · 02 Dec 2019

EMAP: Explanation by Minimal Adversarial Perturbation
M. Chapman-Rounds, Marc-Andre Schulz, Erik Pazos, K. Georgatzis
AAML, FAtt · 10 / 6 / 0 · 02 Dec 2019

DeepSmartFuzzer: Reward Guided Test Generation For Deep Learning
Samet Demir, Hasan Ferit Eniser, A. Sen
AAML · 11 / 28 / 0 · 24 Nov 2019

Rethinking Cooperative Rationalization: Introspective Extraction and Complement Control
Mo Yu, Shiyu Chang, Yang Zhang, Tommi Jaakkola
21 / 140 / 0 · 29 Oct 2019

A Game Theoretic Approach to Class-wise Selective Rationalization
Shiyu Chang, Yang Zhang, Mo Yu, Tommi Jaakkola
22 / 60 / 0 · 28 Oct 2019

CXPlain: Causal Explanations for Model Interpretation under Uncertainty
Patrick Schwab, W. Karlen
FAtt, CML · 40 / 205 / 0 · 27 Oct 2019

Asymmetric Shapley values: incorporating causal knowledge into model-agnostic explainability
Christopher Frye, C. Rowat, Ilya Feige
16 / 180 / 0 · 14 Oct 2019

Make Up Your Mind! Adversarial Generation of Inconsistent Natural Language Explanations
Oana-Maria Camburu, Brendan Shillingford, Pasquale Minervini, Thomas Lukasiewicz, Phil Blunsom
AAML, GAN · 13 / 96 / 0 · 07 Oct 2019

Can I Trust the Explainer? Verifying Post-hoc Explanatory Methods
Oana-Maria Camburu, Eleonora Giunchiglia, Jakob N. Foerster, Thomas Lukasiewicz, Phil Blunsom
FAtt, AAML · 29 / 60 / 0 · 04 Oct 2019

Explaining and Interpreting LSTMs
L. Arras, Jose A. Arjona-Medina, Michael Widrich, G. Montavon, Michael Gillhofer, K. Müller, Sepp Hochreiter, Wojciech Samek
FAtt, AI4TS · 21 / 79 / 0 · 25 Sep 2019

How to Incorporate Monotonicity in Deep Networks While Preserving Flexibility?
Akhil Gupta, Naman Shukla, Lavanya Marla, A. Kolbeinsson, Kartik Yellepeddi
6 / 39 / 0 · 24 Sep 2019

BPMR: Bayesian Probabilistic Multivariate Ranking
Nan Wang, Hongning Wang
23 / 0 / 0 · 18 Sep 2019

Improving the Explainability of Neural Sentiment Classifiers via Data Augmentation
Hanjie Chen, Yangfeng Ji
16 / 9 / 0 · 10 Sep 2019

TabNet: Attentive Interpretable Tabular Learning
Sercan Ö. Arik, Tomas Pfister
LMTD · 55 / 1,289 / 0 · 20 Aug 2019

Neural Image Compression and Explanation
Xiang Li, Shihao Ji
12 / 10 / 0 · 09 Aug 2019

How model accuracy and explanation fidelity influence user trust
A. Papenmeier, G. Englebienne, C. Seifert
FaML · 20 / 108 / 0 · 26 Jul 2019

Explaining an increase in predicted risk for clinical alerts
Michaela Hardt, A. Rajkomar, Gerardo Flores, Andrew M. Dai, M. Howell, Greg S. Corrado, Claire Cui, Moritz Hardt
FAtt · 17 / 12 / 0 · 10 Jul 2019

ASAC: Active Sensing using Actor-Critic models
Jinsung Yoon, James Jordon, M. Schaar
CML · 11 / 16 / 0 · 16 Jun 2019

Deep-gKnock: nonlinear group-feature selection with deep neural network
G. Zhu, Tingting Zhao
28 / 13 / 0 · 24 May 2019

Evaluating Recurrent Neural Network Explanations
L. Arras, Ahmed Osman, K. Müller, Wojciech Samek
XAI, FAtt · 24 / 88 / 0 · 26 Apr 2019

Interpreting Black Box Models via Hypothesis Testing
Collin Burns, Jesse Thomason, Wesley Tansey
FAtt · 11 / 9 / 0 · 29 Mar 2019

GNNExplainer: Generating Explanations for Graph Neural Networks
Rex Ying, Dylan Bourgeois, Jiaxuan You, Marinka Zitnik, J. Leskovec
LLMAG · 37 / 1,289 / 0 · 10 Mar 2019

What to Expect of Classifiers? Reasoning about Logistic Regression with Missing Features
Pasha Khosravi, Yitao Liang, YooJung Choi, Mathias Niepert
14 / 44 / 0 · 05 Mar 2019

Explaining a black-box using Deep Variational Information Bottleneck Approach
Seo-Jin Bang, P. Xie, Heewook Lee, Wei Wu, Eric Xing
XAI, FAtt · 22 / 75 / 0 · 19 Feb 2019

F-BLEAU: Fast Black-box Leakage Estimation
Giovanni Cherubin, K. Chatzikokolakis, C. Palamidessi
23 / 34 / 0 · 04 Feb 2019

Reparameterizable Subset Sampling via Continuous Relaxations
Sang Michael Xie, Stefano Ermon
BDL · 11 / 96 / 0 · 29 Jan 2019

On the (In)fidelity and Sensitivity for Explanations
Chih-Kuan Yeh, Cheng-Yu Hsieh, A. Suggala, David I. Inouye, Pradeep Ravikumar
FAtt · 39 / 449 / 0 · 27 Jan 2019

Concrete Autoencoders for Differentiable Feature Selection and Reconstruction
Abubakar Abid, M. F. Balin, James Zou
SyDa · 23 / 224 / 0 · 27 Jan 2019

On Network Science and Mutual Information for Explaining Deep Neural Networks
Brian Davis, Umang Bhatt, Kartikeya Bhardwaj, R. Marculescu, J. M. F. Moura
FedML, SSL, FAtt · 21 / 10 / 0 · 20 Jan 2019

Ten ways to fool the masses with machine learning
F. Minhas, Amina Asif, Asa Ben-Hur
FedML, HAI · 33 / 5 / 0 · 07 Jan 2019

Explaining Neural Networks Semantically and Quantitatively
Runjin Chen, Hao Chen, Ge Huang, Jie Ren, Quanshi Zhang
FAtt · 23 / 54 / 0 · 18 Dec 2018

A Survey of Safety and Trustworthiness of Deep Neural Networks: Verification, Testing, Adversarial Attack and Defence, and Interpretability
Xiaowei Huang, Daniel Kroening, Wenjie Ruan, Marta Kwiatkowska, Youcheng Sun, Emese Thamo, Min Wu, Xinping Yi
AAML · 24 / 50 / 0 · 18 Dec 2018

Learning to Explain with Complemental Examples
Atsushi Kanehira, Tatsuya Harada
12 / 40 / 0 · 04 Dec 2018

Multimodal Explanations by Predicting Counterfactuality in Videos
Atsushi Kanehira, Kentaro Takemoto, S. Inayoshi, Tatsuya Harada
26 / 35 / 0 · 04 Dec 2018

YASENN: Explaining Neural Networks via Partitioning Activation Sequences
Yaroslav Zharov, Denis Korzhenkov, J. Lyu, Alexander Tuzhilin
FAtt, AAML · 11 / 6 / 0 · 07 Nov 2018

Feature Selection using Stochastic Gates
Yutaro Yamada, Ofir Lindenbaum, S. Negahban, Y. Kluger
22 / 43 / 0 · 09 Oct 2018

What made you do this? Understanding black-box decisions with sufficient input subsets
Brandon Carter, Jonas W. Mueller, Siddhartha Jain, David K. Gifford
FAtt · 37 / 77 / 0 · 09 Oct 2018

Sanity Checks for Saliency Maps
Julius Adebayo, Justin Gilmer, M. Muelly, Ian Goodfellow, Moritz Hardt, Been Kim
FAtt, AAML, XAI · 67 / 1,931 / 0 · 08 Oct 2018

Rule induction for global explanation of trained models
Madhumita Sushil, Simon Suster, Walter Daelemans
FAtt · 14 / 17 / 0 · 29 Aug 2018

L-Shapley and C-Shapley: Efficient Model Interpretation for Structured Data
Jianbo Chen, Le Song, Martin J. Wainwright, Michael I. Jordan
FAtt, TDI · 14 / 213 / 0 · 08 Aug 2018

Computationally Efficient Measures of Internal Neuron Importance
Avanti Shrikumar, Jocelin Su, A. Kundaje
FAtt · 18 / 29 / 0 · 26 Jul 2018

Explaining Image Classifiers by Counterfactual Generation
C. Chang, Elliot Creager, Anna Goldenberg, David Duvenaud
VLM · 11 / 265 / 0 · 20 Jul 2018

Optimal Piecewise Local-Linear Approximations
Kartik Ahuja, W. Zame, M. Schaar
FAtt · 27 / 1 / 0 · 27 Jun 2018

Information Constraints on Auto-Encoding Variational Bayes
Romain Lopez, Jeffrey Regier, Michael I. Jordan, N. Yosef
BDL · 14 / 123 / 0 · 22 May 2018