On Computability, Learnability and Extractability of Finite State Machines from Recurrent Neural Networks
Reda Marzouk
10 September 2020
arXiv: 2009.06398
Papers citing "On Computability, Learnability and Extractability of Finite State Machines from Recurrent Neural Networks" (10 papers)
Distance and Equivalence between Finite State Machines and Recurrent Neural Networks: Computational results
Reda Marzouk, C. D. L. Higuera
01 Apr 2020

Learning Finite State Representations of Recurrent Policy Networks
Anurag Koul, S. Greydanus, Alan Fern
29 Nov 2018

Implicit Regularization of Stochastic Gradient Descent in Natural Language Processing: Observations and Implications
Deren Lei, Zichen Sun, Yijun Xiao, William Yang Wang
01 Nov 2018

Stronger generalization bounds for deep nets via a compression approach
Sanjeev Arora, Rong Ge, Behnam Neyshabur, Yi Zhang
14 Feb 2018

Recurrent Neural Networks as Weighted Language Recognizers
Yining Chen, Sorcha Gilroy, A. Maletti, Jonathan May, Kevin Knight
15 Nov 2017

Interpretable Convolutional Neural Networks
Quanshi Zhang, Ying Nian Wu, Song-Chun Zhu
02 Oct 2017

Implicit Regularization in Deep Learning
Behnam Neyshabur
06 Sep 2017

Linguistic Knowledge as Memory for Recurrent Neural Networks
Bhuwan Dhingra, Zhilin Yang, William W. Cohen, Ruslan Salakhutdinov
07 Mar 2017

Visualizing and Understanding Recurrent Networks
A. Karpathy, Justin Johnson, Li Fei-Fei
05 Jun 2015

Visualizing and Understanding Convolutional Networks
Matthew D. Zeiler, Rob Fergus
12 Nov 2013