Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies
Tal Linzen, Emmanuel Dupoux, Yoav Goldberg
arXiv:1611.01368, 4 November 2016
Papers citing "Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies" (showing 50 of 496)
CLiMP: A Benchmark for Chinese Language Model Evaluation. Beilei Xiang, Changbing Yang, Yu Li, Alex Warstadt, Katharina Kann (26 Jan 2021)
Deep Subjecthood: Higher-Order Grammatical Features in Multilingual BERT. Isabel Papadimitriou, Ethan A. Chi, Richard Futrell, Kyle Mahowald (26 Jan 2021)
Exploring Transitivity in Neural NLI Models through Veridicality. Hitomi Yanaka, K. Mineshima, Kentaro Inui (26 Jan 2021)
Coloring the Black Box: What Synesthesia Tells Us about Character Embeddings. Katharina Kann, Mauro M. Monsalve-Mercado (26 Jan 2021)
Evaluating Models of Robust Word Recognition with Serial Reproduction. Stephan C. Meylan, Sathvik Nair, Thomas Griffiths (24 Jan 2021)
Can RNNs learn Recursive Nested Subject-Verb Agreements? Yair Lakretz, T. Desbordes, J. King, Benoît Crabbé, Maxime Oquab, S. Dehaene (06 Jan 2021)
Recoding latent sentence representations -- Dynamic gradient-based activation modification in RNNs. Dennis Ulmer (03 Jan 2021)
Towards a Universal Continuous Knowledge Base. Gang Chen, Maosong Sun, Yang Liu (25 Dec 2020)
Mapping the Timescale Organization of Neural Language Models. H. Chien, Jinhang Zhang, C. Honey (12 Dec 2020)
Picking BERT's Brain: Probing for Linguistic Dependencies in Contextualized Embeddings Using Representational Similarity Analysis. Michael A. Lepori, R. Thomas McCoy (24 Nov 2020)
The Zero Resource Speech Benchmark 2021: Metrics and baselines for unsupervised spoken language modeling. Tu Nguyen, Maureen de Seyssel, Patricia Roze, M. Rivière, Evgeny Kharitonov, Alexei Baevski, Ewan Dunbar, Emmanuel Dupoux (23 Nov 2020)
diagNNose: A Library for Neural Activation Analysis. Jaap Jumelet (13 Nov 2020)
Analyzing Neural Discourse Coherence Models. Youmna Farag, Josef Valvoda, H. Yannakoudakis, Ted Briscoe (12 Nov 2020)
CxGBERT: BERT meets Construction Grammar. Harish Tayyar Madabushi, Laurence Romain, Dagmar Divjak, P. Milin (09 Nov 2020)
Investigating Novel Verb Learning in BERT: Selectional Preference Classes and Alternation-Based Syntactic Generalization. Tristan Thrush, Ethan Gotlieb Wilcox, R. Levy (04 Nov 2020)
Influence Patterns for Explaining Information Flow in BERT. Kaiji Lu, Zifan Wang, Piotr Mardziel, Anupam Datta (02 Nov 2020)
Sequence-to-Sequence Networks Learn the Meaning of Reflexive Anaphora. Robert Frank, Jackson Petty (02 Nov 2020)
Vec2Sent: Probing Sentence Embeddings with Natural Language Generation. M. Kerscher, Steffen Eger (01 Nov 2020)
Word Frequency Does Not Predict Grammatical Knowledge in Language Models. Charles Yu, Ryan Sie, Nicolas Tedeschi, Leon Bergen (26 Oct 2020)
Deep Clustering of Text Representations for Supervision-free Probing of Syntax. Vikram Gupta, Haoyue Shi, Kevin Gimpel, Mrinmaya Sachan (24 Oct 2020)
Measuring Association Between Labels and Free-Text Rationales. Sarah Wiegreffe, Ana Marasović, Noah A. Smith (24 Oct 2020)
Language Models are Open Knowledge Graphs. Chenguang Wang, Xiao Liu, D. Song (22 Oct 2020)
Detecting and Exorcising Statistical Demons from Language Models with Anti-Models of Negative Data. Michael L. Wick, Kate Silverstein, Jean-Baptiste Tristan, Adam Craig Pocock, Mark Johnson (22 Oct 2020)
Explicitly Modeling Syntax in Language Models with Incremental Parsing and a Dynamic Oracle. Songlin Yang, Shawn Tan, Alessandro Sordoni, Siva Reddy, Rameswar Panda (21 Oct 2020)
RNNs can generate bounded hierarchical languages with optimal memory. John Hewitt, Michael Hahn, Surya Ganguli, Percy Liang, Christopher D. Manning (15 Oct 2020)
Structural Supervision Improves Few-Shot Learning and Syntactic Generalization in Neural Language Models. Ethan Gotlieb Wilcox, Peng Qian, Richard Futrell, Ryosuke Kohita, R. Levy, Miguel Ballesteros (12 Oct 2020)
Can RNNs trained on harder subject-verb agreement instances still perform well on easier ones? Hritik Bansal, Gantavya Bhatt, Sumeet Agarwal (10 Oct 2020)
Discourse structure interacts with reference but not syntax in neural language models. Forrest Davis, Marten van Schijndel (10 Oct 2020)
Recurrent babbling: evaluating the acquisition of grammar from limited input data. Ludovica Pannitto, Aurélie Herbelot (09 Oct 2020)
BERTering RAMS: What and How Much does BERT Already Know About Event Arguments? -- A Study on the RAMS Dataset. Varun Gangal, Eduard H. Hovy (08 Oct 2020)
Assessing Phrasal Representation and Composition in Transformers. Lang-Chi Yu, Allyson Ettinger (08 Oct 2020)
Exploring BERT's Sensitivity to Lexical Cues using Tests from Semantic Priming. Kanishka Misra, Allyson Ettinger, Julia Taylor Rayz (06 Oct 2020)
Intrinsic Probing through Dimension Selection. Lucas Torroba Hennigen, Adina Williams, Ryan Cotterell (06 Oct 2020)
BERT Knows Punta Cana is not just beautiful, it's gorgeous: Ranking Scalar Adjectives with Contextualised Representations. Aina Garí Soler, Marianna Apidianaki (06 Oct 2020)
How LSTM Encodes Syntax: Exploring Context Vectors and Semi-Quantization on Natural Text. Chihiro Shibata, Kei Uchiumi, D. Mochihashi (01 Oct 2020)
What does it mean to be language-agnostic? Probing multilingual sentence encoders for typological properties. Rochelle Choenni, Ekaterina Shutova (27 Sep 2020)
Multi-timescale Representation Learning in LSTM Language Models. Shivangi Mahto, Vy A. Vo, Javier S. Turek, Alexander G. Huth (27 Sep 2020)
Improving Robustness and Generality of NLP Models Using Disentangled Representations. Jiawei Wu, Xiaoya Li, Xiang Ao, Yuxian Meng, Fei Wu, Jiwei Li (21 Sep 2020)
An information theoretic view on selecting linguistic probes. Zining Zhu, Frank Rudzicz (15 Sep 2020)
Analysis and Evaluation of Language Models for Word Sense Disambiguation. Daniel Loureiro, Kiamehr Rezaee, Mohammad Taher Pilehvar, Jose Camacho-Collados (26 Aug 2020)
Compression of Deep Learning Models for Text: A Survey. Manish Gupta, Puneet Agrawal (12 Aug 2020)
Word meaning in minds and machines. Brenden M. Lake, G. Murphy (04 Aug 2020)
Evaluating German Transformer Language Models with Syntactic Agreement Tests. Karolina Zaczynska, Nils Feldhus, Robert Schwarzenberg, Aleksandra Gabryszak, Sebastian Möller (07 Jul 2020)
Learning Sparse Prototypes for Text Generation. Junxian He, Taylor Berg-Kirkpatrick, Graham Neubig (29 Jun 2020)
Measuring Memorization Effect in Word-Level Neural Networks Probing. Rudolf Rosa, Tomáš Musil, David Mareček (29 Jun 2020)
Differentiable Window for Dynamic Local Attention. Thanh-Tung Nguyen, Xuan-Phi Nguyen, Chenyu You, Xiaoli Li (24 Jun 2020)
Mechanisms for Handling Nested Dependencies in Neural-Network Language Models and Humans. Yair Lakretz, Dieuwke Hupkes, A. Vergallito, Marco Marelli, Marco Baroni, S. Dehaene (19 Jun 2020)
Aligning Faithful Interpretations with their Social Attribution. Alon Jacovi, Yoav Goldberg (01 Jun 2020)
Transferring Inductive Biases through Knowledge Distillation. Samira Abnar, Mostafa Dehghani, Willem H. Zuidema (31 May 2020)
Syntactic Structure Distillation Pretraining For Bidirectional Encoders. A. Kuncoro, Lingpeng Kong, Daniel Fried, Dani Yogatama, Laura Rimell, Chris Dyer, Phil Blunsom (27 May 2020)