Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies
Tal Linzen, Emmanuel Dupoux, Yoav Goldberg
arXiv:1611.01368. 4 November 2016.

Papers citing "Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies" (50 of 496 papers shown)
How much complexity does an RNN architecture need to learn syntax-sensitive dependencies?
Gantavya Bhatt, Hritik Bansal, Rishu Singh, Sumeet Agarwal. 17 May 2020.

Encodings of Source Syntax: Similarities in NMT Representations Across Target Languages
Tyler A. Chang, Anna N. Rafferty. 17 May 2020.

On the Robustness of Language Encoders against Grammatical Errors
Fan Yin, Quanyu Long, Tao Meng, Kai-Wei Chang. 12 May 2020.

A Systematic Assessment of Syntactic Generalization in Neural Language Models
Jennifer Hu, Jon Gauthier, Peng Qian, Ethan Gotlieb Wilcox, R. Levy. 07 May 2020.

Spying on your neighbors: Fine-grained probing of contextual embeddings for information about surrounding words
Josef Klafka, Allyson Ettinger. 04 May 2020.

The Sensitivity of Language Models and Humans to Winograd Schema Perturbations
Mostafa Abdou, Vinit Ravishankar, Maria Barrett, Yonatan Belinkov, Desmond Elliott, Anders Søgaard. 04 May 2020.

From SPMRL to NMRL: What Did We Learn (and Unlearn) in a Decade of Parsing Morphologically-Rich Languages (MRLs)?
Reut Tsarfaty, Dan Bareket, Stav Klein, Amit Seker. 04 May 2020.

Influence Paths for Characterizing Subject-Verb Number Agreement in LSTM Language Models
Kaiji Lu, Piotr (Peter) Mardziel, Klas Leino, Matt Fredrikson, Anupam Datta. 03 May 2020.

Emergence of Syntax Needs Minimal Supervision
Raphaël Bailly, Kata Gábor. 03 May 2020.

A Two-Stage Masked LM Method for Term Set Expansion
Guy Kushilevitz, Shaul Markovitch, Yoav Goldberg. 03 May 2020.

Quantifying Attention Flow in Transformers
Samira Abnar, Willem H. Zuidema. 02 May 2020.

Cross-Linguistic Syntactic Evaluation of Word Prediction Models
Aaron Mueller, Garrett Nicolai, Panayiota Petrou-Zeniou, N. Talmina, Tal Linzen. 01 May 2020.

Recurrent Neural Network Language Models Always Learn English-Like Relative Clause Attachment
Forrest Davis, Marten van Schijndel. 01 May 2020.

Attribution Analysis of Grammatical Dependencies in LSTMs
Sophie Hao. 30 Apr 2020.

Representations of Syntax [MASK] Useful: Effects of Constituency and Dependency Structure in Recursive LSTMs
Michael A. Lepori, Tal Linzen, R. Thomas McCoy. 30 Apr 2020.

Does Data Augmentation Improve Generalization in NLP?
Rohan Jha, Charles Lovering, Ellie Pavlick. 30 Apr 2020.

Do Neural Models Learn Systematicity of Monotonicity Inference in Natural Language?
Hitomi Yanaka, K. Mineshima, D. Bekki, Kentaro Inui. 30 Apr 2020.

Learning Music Helps You Read: Using Transfer to Study Linguistic Structure in Language Models
Isabel Papadimitriou, Dan Jurafsky. 30 Apr 2020.

Hierarchical Encoders for Modeling and Interpreting Screenplays
G. Bhat, Avneesh Singh Saluja, Melody Dye, Jan Florjanczyk. 30 Apr 2020.

Quantifying the Contextualization of Word Representations with Semantic Class Probing
Mengjie Zhao, Philipp Dufter, Yadollah Yaghoobzadeh, Hinrich Schütze. 25 Apr 2020.

Syntactic Structure from Deep Learning
Tal Linzen, Marco Baroni. 22 Apr 2020.

Null It Out: Guarding Protected Attributes by Iterative Nullspace Projection
Shauli Ravfogel, Yanai Elazar, Hila Gonen, Michael Twiton, Yoav Goldberg. 16 Apr 2020.

On the Linguistic Capacity of Real-Time Counter Automata
William Merrill. 15 Apr 2020.

Joint translation and unit conversion for end-to-end localization
Georgiana Dinu, Prashant Mathur, Marcello Federico, Stanislas Lauly, Yaser Al-Onaizan. 10 Apr 2020.

Overestimation of Syntactic Representation in Neural Language Models
Jordan Kodner, Nitish Gupta. 10 Apr 2020.

Dependency-Based Neural Representations for Classifying Lines of Programs
Shashank Srikant, Nicolas Lesimple, Una-May O’Reilly. 08 Apr 2020.

Frequency, Acceptability, and Selection: A case study of clause-embedding
Aaron Steven White, Kyle Rawlins. 08 Apr 2020.

A Systematic Analysis of Morphological Content in BERT Models for Multiple Languages
Daniel Edmiston. 06 Apr 2020.

An Analysis of the Utility of Explicit Negative Examples to Improve the Syntactic Abilities of Neural Language Models
Hiroshi Noji, Hiroya Takamura. 06 Apr 2020.

Understanding Cross-Lingual Syntactic Transfer in Multilingual Recurrent Neural Networks
Prajit Dhar, Arianna Bisazza. 31 Mar 2020.

Information-Theoretic Probing with Minimum Description Length
Elena Voita, Ivan Titov. 27 Mar 2020.

Fixed Encoder Self-Attention Patterns in Transformer-Based Machine Translation
Alessandro Raganato, Yves Scherrer, Jörg Tiedemann. 24 Feb 2020.

On the impressive performance of randomly weighted encoders in summarization tasks
Jonathan Pilault, Jaehong Park, C. Pal. 21 Feb 2020.

Tree-structured Attention with Hierarchical Accumulation
Xuan-Phi Nguyen, Chenyu You, Guosheng Lin, R. Socher. 19 Feb 2020.

Assessing the Memory Ability of Recurrent Neural Networks
Cheng Zhang, Qiuchi Li, L. Hua, D. Song. 18 Feb 2020.

Parsing as Pretraining
David Vilares, Michalina Strzyz, Anders Søgaard, Carlos Gómez-Rodríguez. 05 Feb 2020.

Does syntax need to grow on trees? Sources of hierarchical inductive bias in sequence-to-sequence networks
R. Thomas McCoy, Robert Frank, Tal Linzen. 10 Jan 2020.

oLMpics -- On what Language Model Pre-training Captures
Alon Talmor, Yanai Elazar, Yoav Goldberg, Jonathan Berant. 31 Dec 2019.

BLiMP: The Benchmark of Linguistic Minimal Pairs for English
Alex Warstadt, Alicia Parrish, Haokun Liu, Anhad Mohananey, Wei Peng, Sheng-Fu Wang, Samuel R. Bowman. 02 Dec 2019.

Neural language modeling of free word order argument structure
Charlotte Rochereau, Benoît Sagot, Emmanuel Dupoux. 30 Nov 2019.

How Can We Know What Language Models Know?
Zhengbao Jiang, Frank F. Xu, Jun Araki, Graham Neubig. 28 Nov 2019.

An Annotated Corpus of Reference Resolution for Interpreting Common Grounding
Takuma Udagawa, Akiko Aizawa. 18 Nov 2019.

Syntax-Infused Transformer and BERT models for Machine Translation and Natural Language Understanding
Dhanasekar Sundararaman, Vivek Subramanian, Guoyin Wang, Shijing Si, Dinghan Shen, Dong Wang, Lawrence Carin. 10 Nov 2019.

Harnessing the linguistic signal to predict scalar inferences
Sebastian Schuster, Yuxing Chen, Judith Degen. 31 Oct 2019.

A memory enhanced LSTM for modeling complex temporal dependencies
Sneha Aenugu. 25 Oct 2019.

Exploring Multilingual Syntactic Sentence Representations
Chen Cecilia Liu, Anderson de Andrade, Muhammad Osama. 25 Oct 2019.

Discovering the Compositional Structure of Vector Representations with Role Learning Networks
Paul Soulos, R. Thomas McCoy, Tal Linzen, P. Smolensky. 21 Oct 2019.

Whatcha lookin' at? DeepLIFTing BERT's Attention in Question Answering
Ekaterina Arkhangelskaia, Sourav Dutta. 14 Oct 2019.

Compositional Generalization for Primitive Substitutions
Yuanpeng Li, Liang Zhao, Jianyu Wang, Joel Hestness. 07 Oct 2019.

Specializing Word Embeddings (for Parsing) by Information Bottleneck
Xiang Lisa Li, Jason Eisner. 01 Oct 2019.