Beyond Word Importance: Contextual Decomposition to Extract Interactions from LSTMs

16 January 2018
W. James Murdoch, Peter J. Liu, Bin Yu
arXiv: 1801.05453

Papers citing "Beyond Word Importance: Contextual Decomposition to Extract Interactions from LSTMs"

Showing 25 of 125 citing papers.
Reverse engineering recurrent networks for sentiment classification reveals line attractor dynamics
Niru Maheswaranathan, Alex H. Williams, Matthew D. Golub, Surya Ganguli, David Sussillo · 25 Jun 2019

Incorporating Priors with Feature Attribution on Text Classification
Frederick Liu, Besim Avci · 19 Jun 2019 · FAtt, FaML

Exploring Interpretable LSTM Neural Networks over Multi-Variable Data
Tian Guo, Tao R. Lin, Nino Antulov-Fantulin · 28 May 2019 · AI4TS

Disentangled Attribution Curves for Interpreting Random Forests and Boosted Trees
Summer Devlin, Chandan Singh, W. James Murdoch, Bin Yu · 18 May 2019 · FAtt

Evaluating Recurrent Neural Network Explanations
L. Arras, Ahmed Osman, K. Müller, Wojciech Samek · 26 Apr 2019 · XAI, FAtt

On Attribution of Recurrent Neural Network Predictions via Additive Decomposition
Mengnan Du, Ninghao Liu, Fan Yang, Shuiwang Ji, Helen Zhou · 27 Mar 2019 · FAtt

NeuralHydrology -- Interpreting LSTMs in Hydrology
Frederik Kratzert, M. Herrnegger, D. Klotz, Sepp Hochreiter, Günter Klambauer · 19 Mar 2019

Explaining a black-box using Deep Variational Information Bottleneck Approach
Seo-Jin Bang, P. Xie, Heewook Lee, Wei Wu, Eric Xing · 19 Feb 2019 · XAI, FAtt

Veridical Data Science
Bin Yu, Karl Kumbier · 23 Jan 2019

Interpretable machine learning: definitions, methods, and applications
W. James Murdoch, Chandan Singh, Karl Kumbier, R. Abbasi-Asl, Bin Yu · 14 Jan 2019 · XAI, HAI

Analysis Methods in Neural Language Processing: A Survey
Yonatan Belinkov, James R. Glass · 21 Dec 2018

Can I trust you more? Model-Agnostic Hierarchical Explanations
Michael Tsang, Youbang Sun, Dongxu Ren, Yan Liu · 12 Dec 2018 · FAtt

Interpretable Deep Learning under Fire
Xinyang Zhang, Ningfei Wang, Hua Shen, S. Ji, Xiapu Luo, Ting Wang · 03 Dec 2018 · AAML, AI4CE

What can AI do for me: Evaluating Machine Learning Interpretations in Cooperative Play
Shi Feng, Jordan L. Boyd-Graber · 23 Oct 2018 · HAI

What made you do this? Understanding black-box decisions with sufficient input subsets
Brandon Carter, Jonas W. Mueller, Siddhartha Jain, David K Gifford · 09 Oct 2018 · FAtt

Improving Moderation of Online Discussions via Interpretable Neural Models
Andrej Svec, Matúš Pikuliak, Marian Simko, Maria Bielikova · 18 Sep 2018

Interpreting Neural Networks With Nearest Neighbors
Eric Wallace, Shi Feng, Jordan L. Boyd-Graber · 08 Sep 2018 · AAML, FAtt, MILM

Explaining Character-Aware Neural Networks for Word-Level Prediction: Do They Discover Linguistic Rules?
Fréderic Godin, Kris Demuynck, J. Dambre, W. D. Neve, T. Demeester · 28 Aug 2018 · AI4CE

Dissecting Contextual Word Embeddings: Architecture and Representation
Matthew E. Peters, Mark Neumann, Luke Zettlemoyer, Wen-tau Yih · 27 Aug 2018

Hierarchical interpretations for neural network predictions
Chandan Singh, W. James Murdoch, Bin Yu · 14 Jun 2018

Pathologies of Neural Models Make Interpretations Difficult
Shi Feng, Eric Wallace, Alvin Grissom II, Mohit Iyyer, Pedro Rodriguez, Jordan L. Boyd-Graber · 20 Apr 2018 · AAML, FAtt

Explanation Methods in Deep Learning: Users, Values, Concerns and Challenges
Gabrielle Ras, Marcel van Gerven, W. Haselager · 20 Mar 2018 · XAI

Learning Memory Access Patterns
Milad Hashemi, Kevin Swersky, Jamie A. Smith, Grant Ayers, Heiner Litz, Jichuan Chang, Christos Kozyrakis, Parthasarathy Ranganathan · 06 Mar 2018

A Comparative Study of Rule Extraction for Recurrent Neural Networks
Qinglong Wang, Kaixuan Zhang, Alexander Ororbia, Masashi Sugiyama, Xue Liu, C. Lee Giles · 16 Jan 2018

Detecting Statistical Interactions from Neural Network Weights
Michael Tsang, Dehua Cheng, Yan Liu · 14 May 2017