Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies
Tal Linzen, Emmanuel Dupoux, Yoav Goldberg · 4 November 2016 · arXiv:1611.01368

Papers citing "Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies"

Showing 50 of 496 citing papers.
Controlled Evaluation of Grammatical Knowledge in Mandarin Chinese Language Models
Yiwen Wang, Jennifer Hu, R. Levy, Peng Qian · 22 Sep 2021

Does Vision-and-Language Pretraining Improve Lexical Grounding?
Tian Yun, Chen Sun, Ellie Pavlick · 21 Sep 2021 · VLM, CoGe

Are Transformers a Modern Version of ELIZA? Observations on French Object Verb Agreement
Bingzhi Li, Guillaume Wisniewski, Benoît Crabbé · 21 Sep 2021

The Language Model Understood the Prompt was Ambiguous: Probing Syntactic Uncertainty Through Generation
Laura Aina, Tal Linzen · 16 Sep 2021 · UQLM

On the Limits of Minimal Pairs in Contrastive Evaluation
Jannis Vamvas, Rico Sennrich · 15 Sep 2021

Can Edge Probing Tasks Reveal Linguistic Knowledge in QA Models?
Sagnik Ray Choudhury, Nikita Bhutani, Isabelle Augenstein · 15 Sep 2021

Frequency Effects on Syntactic Rule Learning in Transformers
Jason W. Wei, Dan Garrette, Tal Linzen, Ellie Pavlick · 14 Sep 2021

Examining Cross-lingual Contextual Embeddings with Orthogonal Structural Probes
Tomasz Limisiewicz, David Mareček · 10 Sep 2021

Transformers in the loop: Polarity in neural models of language
Lisa Bylinina, Alexey Tikhonov · 08 Sep 2021

How much pretraining data do language models need to learn syntax?
Laura Pérez-Mayos, Miguel Ballesteros, Leo Wanner · 07 Sep 2021
How Does Adversarial Fine-Tuning Benefit BERT?
J. Ebrahimi, Hao Yang, Wei Zhang · 31 Aug 2021 · AAML

Neuron-level Interpretation of Deep NLP Models: A Survey
Hassan Sajjad, Nadir Durrani, Fahim Dalvi · 30 Aug 2021 · MILM, AI4CE

Evaluating the Robustness of Neural Language Models to Input Perturbations
M. Moradi, Matthias Samwald · 27 Aug 2021 · AAML

A Game Interface to Study Semantic Grounding in Text-Based Models
Timothee Mickus, Mathieu Constant, Denis Paperno · 17 Aug 2021

Post-hoc Interpretability for Neural NLP: A Survey
Andreas Madsen, Siva Reddy, A. Chandar · 10 Aug 2021 · XAI

Towards Zero-shot Language Modeling
Edoardo Ponti, Ivan Vulić, Ryan Cotterell, Roi Reichart, Anna Korhonen · 06 Aug 2021

The Benchmark Lottery
Mostafa Dehghani, Yi Tay, A. Gritsenko, Zhe Zhao, N. Houlsby, Fernando Diaz, Donald Metzler, Oriol Vinyals · 14 Jul 2021

What do End-to-End Speech Models Learn about Speaker, Language and Channel Information? A Layer-wise and Neuron-level Analysis
Shammur A. Chowdhury, Nadir Durrani, Ahmed M. Ali · 01 Jul 2021
On the proper role of linguistically-oriented deep net analysis in linguistic theorizing
Marco Baroni · 16 Jun 2021

Causal Analysis of Syntactic Agreement Mechanisms in Neural Language Models
Matthew Finlayson, Aaron Mueller, Sebastian Gehrmann, Stuart M. Shieber, Tal Linzen, Yonatan Belinkov · 10 Jun 2021

A Targeted Assessment of Incremental Processing in Neural Language Models and Humans
Ethan Gotlieb Wilcox, P. Vani, R. Levy · 06 Jun 2021

Do Grammatical Error Correction Models Realize Grammatical Generalization?
Masato Mita, Hitomi Yanaka · 06 Jun 2021

Do Syntactic Probes Probe Syntax? Experiments with Jabberwocky Probing
Rowan Hall Maudslay, Ryan Cotterell · 04 Jun 2021

The Limitations of Limited Context for Constituency Parsing
Yuchen Li, Andrej Risteski · 03 Jun 2021

Uncovering Constraint-Based Behavior in Neural Models via Targeted Fine-Tuning
Forrest Davis, Marten van Schijndel · 02 Jun 2021 · AI4CE

SyGNS: A Systematic Generalization Testbed Based on Natural Language Semantics
Hitomi Yanaka, K. Mineshima, Kentaro Inui · 02 Jun 2021 · NAI, AI4CE

John praised Mary because he? Implicit Causality Bias and Its Interaction with Explicit Cues in LMs
Yova Kementchedjhieva, Mark Anderson, Anders Søgaard · 02 Jun 2021

Using Integrated Gradients and Constituency Parse Trees to explain Linguistic Acceptability learnt by BERT
Anmol Nayak, Hariprasad Timmapathini · 01 Jun 2021
Language Model Evaluation Beyond Perplexity
Clara Meister, Ryan Cotterell · 31 May 2021

How transfer learning impacts linguistic knowledge in deep NLP models?
Nadir Durrani, Hassan Sajjad, Fahim Dalvi · 31 May 2021

Effective Batching for Recurrent Neural Network Grammars
Hiroshi Noji, Yohei Oseki · 31 May 2021 · GNN

Language Models Use Monotonicity to Assess NPI Licensing
Jaap Jumelet, Milica Denić, Jakub Szymanik, Dieuwke Hupkes, Shane Steinert-Threlkeld · 28 May 2021 · KELM

Fine-grained Interpretation and Causation Analysis in Deep NLP Models
Hassan Sajjad, Narine Kokhlikyan, Fahim Dalvi, Nadir Durrani · 17 May 2021 · MILM

Counterfactual Interventions Reveal the Causal Effect of Relative Clause Representations on Agreement Prediction
Shauli Ravfogel, Grusha Prasad, Tal Linzen, Yoav Goldberg · 14 May 2021

Slower is Better: Revisiting the Forgetting Mechanism in LSTM for Slower Information Decay
H. Chien, Javier S. Turek, Nicole M. Beckage, Vy A. Vo, C. Honey, Ted Willke · 12 May 2021

Assessing the Syntactic Capabilities of Transformer-based Multilingual Language Models
Laura Pérez-Mayos, Alba Táboas García, Simon Mille, Leo Wanner · 10 May 2021 · ELM, LRM

Let's Play Mono-Poly: BERT Can Reveal Words' Polysemy Level and Partitionability into Senses
Aina Garí Soler, Marianna Apidianaki · 29 Apr 2021 · MILM

Morph Call: Probing Morphosyntactic Content of Multilingual Transformers
Vladislav Mikhailov, O. Serikov, Ekaterina Artemova · 26 Apr 2021
26 Apr 2021
Attention vs non-attention for a Shapley-based explanation method
Attention vs non-attention for a Shapley-based explanation method
T. Kersten
Hugh Mee Wong
Jaap Jumelet
Dieuwke Hupkes
33
4
0
26 Apr 2021
Sensitivity as a Complexity Measure for Sequence Classification Tasks
Sensitivity as a Complexity Measure for Sequence Classification Tasks
Michael Hahn
Dan Jurafsky
Richard Futrell
150
22
0
21 Apr 2021
Refining Targeted Syntactic Evaluation of Language Models
Refining Targeted Syntactic Evaluation of Language Models
Benjamin Newman
Kai-Siang Ang
Julia Gong
John Hewitt
29
43
0
19 Apr 2021
Masked Language Modeling and the Distributional Hypothesis: Order Word
  Matters Pre-training for Little
Masked Language Modeling and the Distributional Hypothesis: Order Word Matters Pre-training for Little
Koustuv Sinha
Robin Jia
Dieuwke Hupkes
J. Pineau
Adina Williams
Douwe Kiela
45
245
0
14 Apr 2021
Better Neural Machine Translation by Extracting Linguistic Information
  from BERT
Better Neural Machine Translation by Extracting Linguistic Information from BERT
Hassan S. Shavarani
Anoop Sarkar
24
15
0
07 Apr 2021
MTLHealth: A Deep Learning System for Detecting Disturbing Content in
  Student Essays
MTLHealth: A Deep Learning System for Detecting Disturbing Content in Student Essays
Joseph Valencia
Erin Yao
11
0
0
07 Mar 2021
Translating the Unseen? Yoruba-English MT in Low-Resource,
  Morphologically-Unmarked Settings
Translating the Unseen? Yoruba-English MT in Low-Resource, Morphologically-Unmarked Settings
Ife Adebara
Muhammad Abdul-Mageed
Miikka Silfverberg
14
6
0
07 Mar 2021
The Rediscovery Hypothesis: Language Models Need to Meet Linguistics
The Rediscovery Hypothesis: Language Models Need to Meet Linguistics
Vassilina Nikoulina
Maxat Tezekbayev
Nuradil Kozhakhmet
Madina Babazhanova
Matthias Gallé
Z. Assylbekov
34
8
0
02 Mar 2021
Vyākarana: A Colorless Green Benchmark for Syntactic Evaluation in
  Indic Languages
Vyākarana: A Colorless Green Benchmark for Syntactic Evaluation in Indic Languages
Rajaswa Patil
Jasleen Dhillon
Siddhant Mahurkar
Saumitra Kulkarni
M. Malhotra
V. Baths
23
1
0
01 Mar 2021
Beyond Fully-Connected Layers with Quaternions: Parameterization of
  Hypercomplex Multiplications with $1/n$ Parameters
Beyond Fully-Connected Layers with Quaternions: Parameterization of Hypercomplex Multiplications with 1/n1/n1/n Parameters
Aston Zhang
Yi Tay
Shuai Zhang
Alvin Chan
A. Luu
S. Hui
Jie Fu
MQ
182
83
0
17 Feb 2021
Measuring and Improving Consistency in Pretrained Language Models
Yanai Elazar, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard H. Hovy, Hinrich Schütze, Yoav Goldberg · 01 Feb 2021 · HILM

Explaining Natural Language Processing Classifiers with Occlusion and Language Modeling
David Harbecke · 28 Jan 2021 · AAML