ResearchTrend.AI
arXiv: 1801.05453
Beyond Word Importance: Contextual Decomposition to Extract Interactions from LSTMs

16 January 2018
W. James Murdoch, Peter J. Liu, Bin Yu

Papers citing "Beyond Word Importance: Contextual Decomposition to Extract Interactions from LSTMs"

50 / 54 papers shown
KernelSHAP-IQ: Weighted Least-Square Optimization for Shapley Interactions
  Fabian Fumagalli, Maximilian Muschalik, Patrick Kolpaczki, Eyke Hüllermeier, Barbara Hammer · 17 May 2024
Interpretable Long-Form Legal Question Answering with Retrieval-Augmented Large Language Models
  Antoine Louis, Gijs van Dijck, Gerasimos Spanakis · 29 Sep 2023 · ELM, AILaw
Single-Class Target-Specific Attack against Interpretable Deep Learning Systems
  Eldor Abdukhamidov, Mohammed Abuhamad, George K. Thiruvathukal, Hyoungshick Kim, Tamer Abuhmed · 12 Jul 2023 · AAML
A semantically enhanced dual encoder for aspect sentiment triplet extraction
  Baoxing Jiang, Shehui Liang, Peiyu Liu, Kaifang Dong, Hongye Li · 14 Jun 2023
DEGREE: Decomposition Based Explanation For Graph Neural Networks
  Qizhang Feng, Ninghao Liu, Fan Yang, Ruixiang Tang, Mengnan Du, Xia Hu · 22 May 2023
Learning with Explanation Constraints
  Rattana Pukdee, Dylan Sam, J. Zico Kolter, Maria-Florina Balcan, Pradeep Ravikumar · 25 Mar 2023 · FAtt
Does a Neural Network Really Encode Symbolic Concepts?
  Mingjie Li, Quanshi Zhang · 25 Feb 2023
Relational Local Explanations
  V. Borisov, Gjergji Kasneci · 23 Dec 2022 · FAtt
Generating Hierarchical Explanations on Text Classification Without Connecting Rules
  Yiming Ju, Yuanzhe Zhang, Kang Liu, Jun Zhao · 24 Oct 2022 · FAtt
Feature Importance for Time Series Data: Improving KernelSHAP
  M. Villani, J. Lockhart, Daniele Magazzeni · 05 Oct 2022 · FAtt, AI4TS
From Attribution Maps to Human-Understandable Explanations through Concept Relevance Propagation
  Reduan Achtibat, Maximilian Dreyer, Ilona Eisenbraun, S. Bosse, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin · 07 Jun 2022 · FAtt
A Fine-grained Interpretability Evaluation Benchmark for Neural NLP
  Lijie Wang, Yaozong Shen, Shu-ping Peng, Shuai Zhang, Xinyan Xiao, Hao Liu, Hongxuan Tang, Ying-Cong Chen, Hua Wu, Haifeng Wang · 23 May 2022 · ELM
FaiRR: Faithful and Robust Deductive Reasoning over Natural Language
  Soumya Sanyal, Harman Singh, Xiang Ren · 19 Mar 2022 · ReLM, LRM
Right for the Right Latent Factors: Debiasing Generative Models via Disentanglement
  Xiaoting Shao, Karl Stelzner, Kristian Kersting · 01 Feb 2022 · CML, DRL
Explainable Deep Learning in Healthcare: A Methodological Survey from an Attribution View
  Di Jin, Elena Sergeeva, W. Weng, Geeticka Chauhan, Peter Szolovits · 05 Dec 2021 · OOD
Machine Learning for Multimodal Electronic Health Records-based Research: Challenges and Perspectives
  Ziyi Liu, Jiaqi Zhang, Yongshuai Hou, Xinran Zhang, Ge Li, Yang Xiang · 09 Nov 2021
Interpreting Deep Learning Models in Natural Language Processing: A Review
  Xiaofei Sun, Diyi Yang, Xiaoya Li, Tianwei Zhang, Yuxian Meng, Han Qiu, Guoyin Wang, Eduard H. Hovy, Jiwei Li · 20 Oct 2021
Discretized Integrated Gradients for Explaining Language Models
  Soumya Sanyal, Xiang Ren · 31 Aug 2021 · FAtt
Neuron-level Interpretation of Deep NLP Models: A Survey
  Hassan Sajjad, Nadir Durrani, Fahim Dalvi · 30 Aug 2021 · MILM, AI4CE
Shapley Explanation Networks
  Rui Wang, Xiaoqian Wang, David I. Inouye · 06 Apr 2021 · TDI, FAtt
A Unified Game-Theoretic Interpretation of Adversarial Robustness
  Jie Ren, Die Zhang, Yisen Wang, Lu Chen, Zhanpeng Zhou, ..., Xu Cheng, Xin Eric Wang, Meng Zhou, Jie Shi, Quanshi Zhang · 12 Mar 2021 · AAML
On the Post-hoc Explainability of Deep Echo State Networks for Time Series Forecasting, Image and Video Classification
  Alejandro Barredo Arrieta, S. Gil-Lopez, I. Laña, Miren Nekane Bilbao, Javier Del Ser · 17 Feb 2021 · AI4TS
Self-Explaining Structures Improve NLP Models
  Zijun Sun, Chun Fan, Qinghong Han, Xiaofei Sun, Yuxian Meng, Fei Wu, Jiwei Li · 03 Dec 2020 · MILM, XAI, LRM, FAtt
Right for the Right Concept: Revising Neuro-Symbolic Concepts by Interacting with their Explanations
  Wolfgang Stammer, P. Schramowski, Kristian Kersting · 25 Nov 2020 · FAtt
Generating Plausible Counterfactual Explanations for Deep Transformers in Financial Text Classification
  Linyi Yang, Eoin M. Kenny, T. L. J. Ng, Yi Yang, Barry Smyth, Ruihai Dong · 23 Oct 2020
A Unified Approach to Interpreting and Boosting Adversarial Transferability
  Xin Eric Wang, Jie Ren, Shuyu Lin, Xiangming Zhu, Yisen Wang, Quanshi Zhang · 08 Oct 2020 · AAML
Learning Variational Word Masks to Improve the Interpretability of Neural Text Classifiers
  Hanjie Chen, Yangfeng Ji · 01 Oct 2020 · AAML, VLM
Interpreting Graph Neural Networks for NLP With Differentiable Edge Masking
  M. Schlichtkrull, Nicola De Cao, Ivan Titov · 01 Oct 2020 · AI4CE
Interpretable Anomaly Detection with DIFFI: Depth-based Isolation Forest Feature Importance
  Mattia Carletti, M. Terzi, Gian Antonio Susto · 21 Jul 2020
Feature Interaction Interpretability: A Case for Explaining Ad-Recommendation Systems via Neural Interaction Detection
  Michael Tsang, Dehua Cheng, Hanpeng Liu, Xuening Feng, Eric Zhou, Yan Liu · 19 Jun 2020 · FAtt
Contextualizing Hate Speech Classifiers with Post-hoc Explanation
  Brendan Kennedy, Xisen Jin, Aida Mostafazadeh Davani, Morteza Dehghani, Xiang Ren · 05 May 2020
Explainable Deep Learning: A Field Guide for the Uninitiated
  Gabrielle Ras, Ning Xie, Marcel van Gerven, Derek Doran · 30 Apr 2020 · AAML, XAI
Sequential Interpretability: Methods, Applications, and Future Direction for Understanding Deep Learning Models in the Context of Sequential Data
  B. Shickel, Parisa Rashidi · 27 Apr 2020 · AI4TS
How recurrent networks implement contextual processing in sentiment analysis
  Niru Maheswaranathan, David Sussillo · 17 Apr 2020
Generating Hierarchical Explanations on Text Classification via Feature Interaction Detection
  Hanjie Chen, Guangtao Zheng, Yangfeng Ji · 04 Apr 2020 · FAtt
Machine Learning in Python: Main developments and technology trends in data science, machine learning, and artificial intelligence
  S. Raschka, Joshua Patterson, Corey J. Nolet · 12 Feb 2020 · AI4CE
Explaining Explanations: Axiomatic Feature Interactions for Deep Networks
  Joseph D. Janizek, Pascal Sturmfels, Su-In Lee · 10 Feb 2020 · FAtt
Explaining and Interpreting LSTMs
  L. Arras, Jose A. Arjona-Medina, Michael Widrich, G. Montavon, Michael Gillhofer, K. Müller, Sepp Hochreiter, Wojciech Samek · 25 Sep 2019 · FAtt, AI4TS
Analysing Neural Language Models: Contextual Decomposition Reveals Default Reasoning in Number and Gender Assignment
  Jaap Jumelet, Willem H. Zuidema, Dieuwke Hupkes · 19 Sep 2019 · LRM
Interpretable and Steerable Sequence Learning via Prototypes
  Yao Ming, Panpan Xu, Huamin Qu, Liu Ren · 23 Jul 2019 · AI4TS
Reverse engineering recurrent networks for sentiment classification reveals line attractor dynamics
  Niru Maheswaranathan, Alex H. Williams, Matthew D. Golub, Surya Ganguli, David Sussillo · 25 Jun 2019
Incorporating Priors with Feature Attribution on Text Classification
  Frederick Liu, Besim Avci · 19 Jun 2019 · FAtt, FaML
Exploring Interpretable LSTM Neural Networks over Multi-Variable Data
  Tian Guo, Tao R. Lin, Nino Antulov-Fantulin · 28 May 2019 · AI4TS
Disentangled Attribution Curves for Interpreting Random Forests and Boosted Trees
  Summer Devlin, Chandan Singh, W. James Murdoch, Bin Yu · 18 May 2019 · FAtt
Veridical Data Science
  Bin Yu, Karl Kumbier · 23 Jan 2019
Interpretable machine learning: definitions, methods, and applications
  W. James Murdoch, Chandan Singh, Karl Kumbier, R. Abbasi-Asl, Bin Yu · 14 Jan 2019 · XAI, HAI
Can I trust you more? Model-Agnostic Hierarchical Explanations
  Michael Tsang, Youbang Sun, Dongxu Ren, Yan Liu · 12 Dec 2018 · FAtt
Interpretable Deep Learning under Fire
  Xinyang Zhang, Ningfei Wang, Hua Shen, S. Ji, Xiapu Luo, Ting Wang · 03 Dec 2018 · AAML, AI4CE
What made you do this? Understanding black-box decisions with sufficient input subsets
  Brandon Carter, Jonas W. Mueller, Siddhartha Jain, David K Gifford · 09 Oct 2018 · FAtt
Interpreting Neural Networks With Nearest Neighbors
  Eric Wallace, Shi Feng, Jordan L. Boyd-Graber · 08 Sep 2018 · AAML, FAtt, MILM