Learning to Explain: An Information-Theoretic Perspective on Model Interpretation

21 February 2018
Jianbo Chen
Le Song
Martin J. Wainwright
Michael I. Jordan
MLT
FAtt

Papers citing "Learning to Explain: An Information-Theoretic Perspective on Model Interpretation"

Showing 50 of 302 citing papers.
A Survey on Neural Network Interpretability
Yu Zhang
Peter Tiňo
A. Leonardis
K. Tang
FaML
XAI
144
665
0
28 Dec 2020
Get It Scored Using AutoSAS -- An Automated System for Scoring Short Answers
Yaman Kumar Singla
Swati Aggarwal
Debanjan Mahata
R. Shah
Ponnurangam Kumaraguru
Roger Zimmermann
26
56
0
21 Dec 2020
Learning from the Best: Rationalizing Prediction by Adversarial Information Calibration
Lei Sha
Oana-Maria Camburu
Thomas Lukasiewicz
133
35
0
16 Dec 2020
E2E-FS: An End-to-End Feature Selection Method for Neural Networks
Brais Cancela
V. Bolón-Canedo
Amparo Alonso-Betanzos
18
9
0
14 Dec 2020
Synthetic Data: Opening the data floodgates to enable faster, more directed development of machine learning methods
James Jordon
A. Wilson
M. van der Schaar
AI4CE
87
16
0
08 Dec 2020
Challenging common interpretability assumptions in feature attribution explanations
Jonathan Dinu
Jeffrey P. Bigham
J. Zico Kolter
16
14
0
04 Dec 2020
Explainable Multivariate Time Series Classification: A Deep Neural Network Which Learns To Attend To Important Variables As Well As Informative Time Intervals
Tsung-Yu Hsieh
Suhang Wang
Yiwei Sun
Vasant Honavar
BDL
AI4TS
FAtt
18
9
0
23 Nov 2020
Interpretable Visual Reasoning via Induced Symbolic Space
Zhonghao Wang
Kai Wang
Mo Yu
Jinjun Xiong
Wen-mei W. Hwu
M. Hasegawa-Johnson
Humphrey Shi
LRM
OCL
16
19
0
23 Nov 2020
Explaining by Removing: A Unified Framework for Model Explanation
Ian Covert
Scott M. Lundberg
Su-In Lee
FAtt
48
243
0
21 Nov 2020
Data Representing Ground-Truth Explanations to Evaluate XAI Methods
S. Amiri
Rosina O. Weber
Prateek Goel
Owen Brooks
Archer Gandley
Brian Kitchell
Aaron Zehm
XAI
43
8
0
18 Nov 2020
Learning outside the Black-Box: The pursuit of interpretable models
Jonathan Crabbé
Yao Zhang
W. Zame
M. van der Schaar
6
24
0
17 Nov 2020
Parameterized Explainer for Graph Neural Network
Dongsheng Luo
Wei Cheng
Dongkuan Xu
Wenchao Yu
Bo Zong
Haifeng Chen
Xiang Zhang
53
542
0
09 Nov 2020
Feature Removal Is a Unifying Principle for Model Explanation Methods
Ian Covert
Scott M. Lundberg
Su-In Lee
FAtt
33
33
0
06 Nov 2020
MAIRE -- A Model-Agnostic Interpretable Rule Extraction Procedure for Explaining Classifiers
Rajat Sharma
N. Reddy
V. Kamakshi
N. C. Krishnan
Shweta Jain
FAtt
27
7
0
03 Nov 2020
Bayesian Importance of Features (BIF)
Kamil Adamczewski
Frederik Harder
Mijung Park
FAtt
15
2
0
26 Oct 2020
A Framework to Learn with Interpretation
Jayneel Parekh
Pavlo Mozharovskyi
Florence d'Alché-Buc
AI4CE
FAtt
25
30
0
19 Oct 2020
Human-interpretable model explainability on high-dimensional data
Damien de Mijolla
Christopher Frye
M. Kunesch
J. Mansir
Ilya Feige
FAtt
25
8
0
14 Oct 2020
Learning Propagation Rules for Attribution Map Generation
Yiding Yang
Jiayan Qiu
Xiuming Zhang
Dacheng Tao
Xinchao Wang
FAtt
38
17
0
14 Oct 2020
Explain2Attack: Text Adversarial Attacks via Cross-Domain Interpretability
M. Hossam
Trung Le
He Zhao
Dinh Q. Phung
SILM
AAML
13
6
0
14 Oct 2020
Learning to Attack with Fewer Pixels: A Probabilistic Post-hoc Framework for Refining Arbitrary Dense Adversarial Attacks
He Zhao
Thanh-Tuan Nguyen
Trung Le
Paul Montague
O. de Vel
Tamas Abraham
Dinh Q. Phung
AAML
24
2
0
13 Oct 2020
Explaining Deep Neural Networks
Oana-Maria Camburu
XAI
FAtt
33
26
0
04 Oct 2020
Learning Variational Word Masks to Improve the Interpretability of Neural Text Classifiers
Hanjie Chen
Yangfeng Ji
AAML
VLM
15
63
0
01 Oct 2020
Information-Theoretic Visual Explanation for Black-Box Classifiers
Jihun Yi
Eunji Kim
Siwon Kim
Sungroh Yoon
FAtt
25
6
0
23 Sep 2020
The Struggles of Feature-Based Explanations: Shapley Values vs. Minimal Sufficient Subsets
Oana-Maria Camburu
Eleonora Giunchiglia
Jakob N. Foerster
Thomas Lukasiewicz
Phil Blunsom
FAtt
23
23
0
23 Sep 2020
Explainable Empirical Risk Minimization
Linli Zhang
Georgios Karakasidis
Arina Odnoblyudova
Leyla Dogruel
Alex Jung
27
5
0
03 Sep 2020
ALEX: Active Learning based Enhancement of a Model's Explainability
Ishani Mondal
Debasis Ganguly
9
2
0
02 Sep 2020
MED-TEX: Transferring and Explaining Knowledge with Less Data from Pretrained Medical Imaging Models
Thanh Nguyen-Duc
He Zhao
Jianfei Cai
Dinh Q. Phung
VLM
MedIm
33
4
0
06 Aug 2020
When is invariance useful in an Out-of-Distribution Generalization problem?
Masanori Koyama
Shoichiro Yamaguchi
OOD
34
65
0
04 Aug 2020
A Causal Lens for Peeking into Black Box Predictive Models: Predictive Model Interpretation via Causal Attribution
A. Khademi
Vasant Honavar
CML
20
9
0
01 Aug 2020
Gaussian Process Regression with Local Explanation
Yuya Yoshikawa
Tomoharu Iwata
FAtt
13
18
0
03 Jul 2020
Interpreting and Disentangling Feature Components of Various Complexity from DNNs
Jie Ren
Mingjie Li
Zexu Liu
Quanshi Zhang
CoGe
19
18
0
29 Jun 2020
Set Based Stochastic Subsampling
Bruno Andreis
Seanie Lee
A. Nguyen
Juho Lee
Eunho Yang
Sung Ju Hwang
BDL
14
0
0
25 Jun 2020
Generative causal explanations of black-box classifiers
Matthew R. O’Shaughnessy
Gregory H. Canal
Marissa Connor
Mark A. Davenport
Christopher Rozell
CML
30
73
0
24 Jun 2020
How does this interaction affect me? Interpretable attribution for feature interactions
Michael Tsang
Sirisha Rambhatla
Yan Liu
FAtt
22
85
0
19 Jun 2020
Gradient Estimation with Stochastic Softmax Tricks
Max B. Paulus
Dami Choi
Daniel Tarlow
Andreas Krause
Chris J. Maddison
BDL
38
85
0
15 Jun 2020
Explaining Predictions by Approximating the Local Decision Boundary
G. Vlassopoulos
T. van Erven
Henry Brighton
Vlado Menkovski
FAtt
25
8
0
14 Jun 2020
DNF-Net: A Neural Architecture for Tabular Data
A. Abutbul
G. Elidan
L. Katzir
Ran El-Yaniv
LMTD
AI4CE
24
29
0
11 Jun 2020
Why Attentions May Not Be Interpretable?
Bing Bai
Jian Liang
Guanhua Zhang
Hao Li
Kun Bai
Fei Wang
FAtt
25
56
0
10 Jun 2020
Neural Methods for Point-wise Dependency Estimation
Yao-Hung Hubert Tsai
Han Zhao
M. Yamada
Louis-Philippe Morency
Ruslan Salakhutdinov
33
31
0
09 Jun 2020
Adversarial Infidelity Learning for Model Interpretation
Jian Liang
Bing Bai
Yuren Cao
Kun Bai
Fei Wang
AAML
54
18
0
09 Jun 2020
Aligning Faithful Interpretations with their Social Attribution
Alon Jacovi
Yoav Goldberg
23
105
0
01 Jun 2020
Rationalizing Text Matching: Learning Sparse Alignments via Optimal Transport
Kyle Swanson
L. Yu
Tao Lei
OT
29
37
0
27 May 2020
NILE : Natural Language Inference with Faithful Natural Language Explanations
Sawan Kumar
Partha P. Talukdar
XAI
LRM
19
160
0
25 May 2020
Evaluating and Aggregating Feature-based Model Explanations
Umang Bhatt
Adrian Weller
J. M. F. Moura
XAI
33
219
0
01 May 2020
How do Decisions Emerge across Layers in Neural Models? Interpretation with Differentiable Masking
Nicola De Cao
M. Schlichtkrull
Wilker Aziz
Ivan Titov
25
90
0
30 Apr 2020
Multimodal Routing: Improving Local and Global Interpretability of Multimodal Language Analysis
Yao-Hung Hubert Tsai
Martin Q. Ma
Muqiao Yang
Ruslan Salakhutdinov
Louis-Philippe Morency
12
4
0
29 Apr 2020
Invariant Rationalization
Shiyu Chang
Yang Zhang
Mo Yu
Tommi Jaakkola
202
201
0
22 Mar 2020
Ground Truth Evaluation of Neural Network Explanations with CLEVR-XAI
L. Arras
Ahmed Osman
Wojciech Samek
XAI
AAML
21
150
0
16 Mar 2020
Neural Generators of Sparse Local Linear Models for Achieving both Accuracy and Interpretability
Yuya Yoshikawa
Tomoharu Iwata
16
7
0
13 Mar 2020
Explaining Knowledge Distillation by Quantifying the Knowledge
Xu Cheng
Zhefan Rao
Yilan Chen
Quanshi Zhang
18
119
0
07 Mar 2020