arXiv:1606.05320
Increasing the Interpretability of Recurrent Neural Networks Using Hidden Markov Models
16 June 2016
Viktoriya Krakovna, Finale Doshi-Velez
Papers citing "Increasing the Interpretability of Recurrent Neural Networks Using Hidden Markov Models" (31 of 31 papers shown)
- A Review of Multimodal Explainable Artificial Intelligence: Past, Present and Future. Shilin Sun, Wenbin An, Feng Tian, Fang Nan, Qidong Liu, Jing Liu, N. Shah, Ping Chen. 18 Dec 2024.
- Generative learning for nonlinear dynamics. William Gilpin. 07 Nov 2023.
- AI for Investment: A Platform Disruption. Mohammad Rasouli, Ravi Chiruvolu, Ali Risheh. 06 Sep 2023.
- Hybrid hidden Markov LSTM for short-term traffic flow prediction. Agnimitra Sengupta, A. Das, S. I. Guler. 11 Jul 2023.
- Weighted Automata Extraction and Explanation of Recurrent Neural Networks for Natural Language Tasks. Zeming Wei, Xiyue Zhang, Yihao Zhang, Meng Sun. 24 Jun 2023.
- BTPK-based interpretable method for NER tasks based on Talmudic Public Announcement Logic. Yulin Chen, Beishui Liao, Bruno Bentzen, Bo Yuan, Zelai Yao, Haixiao Chi, D. Gabbay. 24 Jan 2022.
- M2Lens: Visualizing and Explaining Multimodal Models for Sentiment Analysis. Xingbo Wang, Jianben He, Zhihua Jin, Muqiao Yang, Yong Wang, Huamin Qu. 17 Jul 2021.
- Absolute Value Constraint: The Reason for Invalid Performance Evaluation Results of Neural Network Models for Stock Price Prediction. Yi Wei. 10 Jan 2021.
- MEME: Generating RNN Model Explanations via Model Extraction. Dmitry Kazhdan, B. Dimanov, M. Jamnik, Pietro Lio. 13 Dec 2020.
- Uncertainty Estimation and Calibration with Finite-State Probabilistic RNNs. Cheng Wang, Carolin (Haas) Lawrence, Mathias Niepert. 24 Nov 2020.
- Scaling Hidden Markov Language Models. Justin T. Chiu, Alexander M. Rush. 09 Nov 2020.
- Towards Ground Truth Explainability on Tabular Data. Brian Barr, Ke Xu, Claudio Silva, E. Bertini, Robert Reilly, C. Bayan Bruss, J. Wittenbach. 20 Jul 2020.
- AI Feynman 2.0: Pareto-optimal symbolic regression exploiting graph modularity. S. Udrescu, A. Tan, Jiahai Feng, Orisvaldo Neto, Tailin Wu, Max Tegmark. 18 Jun 2020.
- Explainable Deep Learning: A Field Guide for the Uninitiated. Gabrielle Ras, Ning Xie, Marcel van Gerven, Derek Doran. 30 Apr 2020.
- Sequential Interpretability: Methods, Applications, and Future Direction for Understanding Deep Learning Models in the Context of Sequential Data. B. Shickel, Parisa Rashidi. 27 Apr 2020.
- Intelligence, physics and information -- the tradeoff between accuracy and simplicity in machine learning. Tailin Wu. 11 Jan 2020.
- Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI. Alejandro Barredo Arrieta, Natalia Díaz Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, ..., S. Gil-Lopez, Daniel Molina, Richard Benjamins, Raja Chatila, Francisco Herrera. 22 Oct 2019.
- Powering Hidden Markov Model by Neural Network based Generative Models. Dong Liu, Antoine Honoré, S. Chatterjee, L. Rasmussen. 13 Oct 2019.
- Scalable Explanation of Inferences on Large Graphs. Chao Chen, Yuhang Liu, Xi Zhang, Sihong Xie. 13 Aug 2019.
- Self-Attentive Hawkes Processes. Qiang Zhang, Aldo Lipani, Ömer Kirnap, Emine Yilmaz. 17 Jul 2019.
- Improving the Performance of the LSTM and HMM Model via Hybridization. Larkin Liu, Yu-Chung Lin, Joshua Reid. 09 Jul 2019.
- Representing Formal Languages: A Comparison Between Finite Automata and Recurrent Neural Networks. Joshua J. Michalenko, Ameesh Shah, Abhinav Verma, Richard G. Baraniuk, Swarat Chaudhuri, Ankit B. Patel. 27 Feb 2019.
- An Evaluation of the Human-Interpretability of Explanation. Isaac Lage, Emily Chen, Jeffrey He, Menaka Narayanan, Been Kim, Sam Gershman, Finale Doshi-Velez. 31 Jan 2019.
- Evaluating the Ability of LSTMs to Learn Context-Free Grammars. Luzi Sennhauser, Robert C. Berwick. 06 Nov 2018.
- Using Machine Learning Safely in Automotive Software: An Assessment and Adaption of Software Process Requirements in ISO 26262. Rick Salay, Krzysztof Czarnecki. 05 Aug 2018.
- Interpreting Neural Network Judgments via Minimal, Stable, and Symbolic Corrections. Xin Zhang, Armando Solar-Lezama, Rishabh Singh. 21 Feb 2018.
- Adversarial Risk and the Dangers of Evaluating Against Weak Attacks. J. Uesato, Brendan O'Donoghue, Aaron van den Oord, Pushmeet Kohli. 15 Feb 2018.
- Understanding Recurrent Neural State Using Memory Signatures. Skanda Koppula, K. Sim, K. K. Chin. 11 Feb 2018.
- Fibres of Failure: Classifying errors in predictive processes. L. Carlsson, Gunnar Carlsson, Mikael Vejdemo-Johansson. 09 Feb 2018.
- How do Humans Understand Explanations from Machine Learning Systems? An Evaluation of the Human-Interpretability of Explanation. Menaka Narayanan, Emily Chen, Jeffrey He, Been Kim, S. Gershman, Finale Doshi-Velez. 02 Feb 2018.
- Interpretable Recurrent Neural Networks Using Sequential Sparse Recovery. Scott Wisdom, Thomas Powers, J. Pitton, L. Atlas. 22 Nov 2016.