Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies

4 November 2016
Tal Linzen
Emmanuel Dupoux
Yoav Goldberg
ArXiv · PDF · HTML

Papers citing "Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies"

50 / 496 papers shown
Exposing Attention Glitches with Flip-Flop Language Modeling
Bingbin Liu
Jordan T. Ash
Surbhi Goel
A. Krishnamurthy
Cyril Zhang
LRM
35
46
0
01 Jun 2023
Empirical Sufficiency Lower Bounds for Language Modeling with Locally-Bootstrapped Semantic Structures
Jakob Prange
Emmanuele Chersoni
32
0
0
30 May 2023
Representation Of Lexical Stylistic Features In Language Models' Embedding Space
Qing Lyu
Marianna Apidianaki
Chris Callison-Burch
39
6
0
29 May 2023
A Method for Studying Semantic Construal in Grammatical Constructions with Interpretable Contextual Embedding Spaces
Gabriella Chronis
Kyle Mahowald
K. Erk
20
8
0
29 May 2023
Syntax and Semantics Meet in the "Middle": Probing the Syntax-Semantics Interface of LMs Through Agentivity
Lindia Tjuatja
Emmy Liu
Lori S. Levin
Graham Neubig
38
2
0
29 May 2023
Modeling rapid language learning by distilling Bayesian priors into artificial neural networks
R. Thomas McCoy
Thomas Griffiths
BDL
42
14
0
24 May 2023
Testing Causal Models of Word Meaning in GPT-3 and -4
Sam Musker
Ellie Pavlick
18
1
0
24 May 2023
Assessing Linguistic Generalisation in Language Models: A Dataset for Brazilian Portuguese
Rodrigo Wilkens
Leonardo Zilio
Aline Villavicencio
24
1
0
23 May 2023
Can LLMs facilitate interpretation of pre-trained language models?
Basel Mousi
Nadir Durrani
Fahim Dalvi
36
12
0
22 May 2023
Prompting is not a substitute for probability measurements in large language models
Jennifer Hu
R. Levy
45
38
0
22 May 2023
Explaining How Transformers Use Context to Build Predictions
Javier Ferrando
Gerard I. Gállego
Ioannis Tsiamas
Marta R. Costa-jussá
32
32
0
21 May 2023
Exploring How Generative Adversarial Networks Learn Phonological Representations
Jing Chen
Micha Elsner
GAN
19
3
0
21 May 2023
Large Linguistic Models: Investigating LLMs' metalinguistic abilities
Gašper Beguš
Maksymilian Dąbkowski
Ryan Rhodes
LRM
42
0
0
01 May 2023
SketchXAI: A First Look at Explainability for Human Sketches
Zhiyu Qu
Yulia Gryaditskaya
Ke Li
Kaiyue Pang
Tao Xiang
Yi-Zhe Song
34
8
0
23 Apr 2023
Expectations over Unspoken Alternatives Predict Pragmatic Inferences
Jennifer Hu
R. Levy
Judith Degen
Sebastian Schuster
27
16
0
07 Apr 2023
Spelling convention sensitivity in neural language models
Elizabeth Nielsen
Christo Kirov
Brian Roark
28
1
0
06 Mar 2023
NxPlain: Web-based Tool for Discovery of Latent Concepts
Fahim Dalvi
Nadir Durrani
Hassan Sajjad
Tamim Jaban
Musab Husaini
Ummar Abbas
15
1
0
06 Mar 2023
Do Multi-Document Summarization Models Synthesize?
Jay DeYoung
Stephanie C. Martinez
Iain J. Marshall
Byron C. Wallace
24
8
0
31 Jan 2023
A Discerning Several Thousand Judgments: GPT-3 Rates the Article + Adjective + Numeral + Noun Construction
Kyle Mahowald
22
24
0
29 Jan 2023
Tracing and Manipulating Intermediate Values in Neural Math Problem Solvers
Yuta Matsumoto
Benjamin Heinzerling
Masashi Yoshikawa
Kentaro Inui
AIFin
33
5
0
17 Jan 2023
Dissociating language and thought in large language models
Kyle Mahowald
Anna A. Ivanova
I. Blank
Nancy Kanwisher
J. Tenenbaum
Evelina Fedorenko
ELM
ReLM
31
209
0
16 Jan 2023
Counteracts: Testing Stereotypical Representation in Pre-trained Language Models
Damin Zhang
Julia Taylor Rayz
Romila Pradhan
44
2
0
11 Jan 2023
Pretraining Without Attention
Junxiong Wang
J. Yan
Albert Gu
Alexander M. Rush
27
48
0
20 Dec 2022
Language model acceptability judgements are not always robust to context
Koustuv Sinha
Jon Gauthier
Aaron Mueller
Kanishka Misra
Keren Fuentes
R. Levy
Adina Williams
23
18
0
18 Dec 2022
A fine-grained comparison of pragmatic language understanding in humans and language models
Jennifer Hu
Sammy Floyd
Olessia Jouravlev
Evelina Fedorenko
E. Gibson
16
52
0
13 Dec 2022
Assessing the Capacity of Transformer to Abstract Syntactic Representations: A Contrastive Analysis Based on Long-distance Agreement
Bingzhi Li
Guillaume Wisniewski
Benoît Crabbé
64
12
0
08 Dec 2022
Mutual Exclusivity Training and Primitive Augmentation to Induce Compositionality
Yichen Jiang
Xiang Zhou
Joey Tianyi Zhou
29
10
0
28 Nov 2022
Evaluation Beyond Task Performance: Analyzing Concepts in AlphaZero in Hex
Charles Lovering
Jessica Zosa Forde
George Konidaris
Ellie Pavlick
Michael L. Littman
21
7
0
26 Nov 2022
A Short Survey of Systematic Generalization
Yuanpeng Li
AI4CE
43
1
0
22 Nov 2022
Characterizing Intrinsic Compositionality in Transformers with Tree Projections
Shikhar Murty
Pratyusha Sharma
Jacob Andreas
Christopher D. Manning
19
39
0
02 Nov 2022
Do LSTMs See Gender? Probing the Ability of LSTMs to Learn Abstract Syntactic Rules
Priyanka Sukumaran
Conor J. Houghton
N. Kazanina
19
4
0
31 Oct 2022
Probing for targeted syntactic knowledge through grammatical error detection
Christopher Davis
Christopher Bryant
Andrew Caines
Marek Rei
P. Buttery
22
3
0
28 Oct 2022
Causal Analysis of Syntactic Agreement Neurons in Multilingual Language Models
Aaron Mueller
Yudi Xia
Tal Linzen
MILM
41
9
0
25 Oct 2022
IELM: An Open Information Extraction Benchmark for Pre-Trained Language Models
Chenguang Wang
Xiao Liu
Dawn Song
VLM
24
2
0
25 Oct 2022
Characterizing Verbatim Short-Term Memory in Neural Language Models
K. Armeni
C. Honey
Tal Linzen
KELM
RALM
33
3
0
24 Oct 2022
Structural generalization is hard for sequence-to-sequence models
Yuekun Yao
Alexander Koller
30
21
0
24 Oct 2022
On the Transformation of Latent Space in Fine-Tuned NLP Models
Nadir Durrani
Hassan Sajjad
Fahim Dalvi
Firoj Alam
34
18
0
23 Oct 2022
SLING: Sino Linguistic Evaluation of Large Language Models
Yixiao Song
Kalpesh Krishna
R. Bhatt
Mohit Iyyer
24
8
0
21 Oct 2022
Log-linear Guardedness and its Implications
Shauli Ravfogel
Yoav Goldberg
Ryan Cotterell
28
2
0
18 Oct 2022
Post-hoc analysis of Arabic transformer models
Ahmed Abdelali
Nadir Durrani
Fahim Dalvi
Hassan Sajjad
15
1
0
18 Oct 2022
Transparency Helps Reveal When Language Models Learn Meaning
Zhaofeng Wu
William Merrill
Hao Peng
Iz Beltagy
Noah A. Smith
21
9
0
14 Oct 2022
On the Explainability of Natural Language Processing Deep Models
Julia El Zini
M. Awad
31
82
0
13 Oct 2022
State-of-the-art generalisation research in NLP: A taxonomy and review
Dieuwke Hupkes
Mario Giulianelli
Verna Dankers
Mikel Artetxe
Yanai Elazar
...
Leila Khalatbari
Maria Ryskina
Rita Frieske
Ryan Cotterell
Zhijing Jin
129
95
0
06 Oct 2022
Are word boundaries useful for unsupervised language learning?
Tu Nguyen
Maureen de Seyssel
Robin Algayres
Patricia Roze
Ewan Dunbar
Emmanuel Dupoux
49
9
0
06 Oct 2022
"No, they did not": Dialogue response dynamics in pre-trained language
  models
"No, they did not": Dialogue response dynamics in pre-trained language models
Sanghee Kim
Lang-Chi Yu
Allyson Ettinger
21
1
0
05 Oct 2022
Probing of Quantitative Values in Abstractive Summarization Models
Nathan M. White
18
0
0
03 Oct 2022
ImmunoLingo: Linguistics-based formalization of the antibody language
Mai Ha Vu
Philippe A. Robert
Rahmad Akbar
B. Swiatczak
G. K. Sandve
Dag Trygve Tryslew Haug
Victor Greiff
AI4CE
28
8
0
26 Sep 2022
Towards Faithful Model Explanation in NLP: A Survey
Qing Lyu
Marianna Apidianaki
Chris Callison-Burch
XAI
117
109
0
22 Sep 2022
Representing Affect Information in Word Embeddings
Yuhan Zhang
Wenqi Chen
Ruihan Zhang
Xiajie Zhang
CVBM
57
3
0
21 Sep 2022
Subject Verb Agreement Error Patterns in Meaningless Sentences: Humans vs. BERT
Karim Lasri
Olga Seminck
Alessandro Lenci
Thierry Poibeau
29
4
0
21 Sep 2022