ResearchTrend.AI
Revisiting the poverty of the stimulus: hierarchical generalization without a hierarchical bias in recurrent neural networks

25 February 2018
R. Thomas McCoy
Robert Frank
Tal Linzen

Papers citing "Revisiting the poverty of the stimulus: hierarchical generalization without a hierarchical bias in recurrent neural networks"

47 / 47 papers shown
Generative Linguistics, Large Language Models, and the Social Nature of Scientific Success
Sophie Hao (25 Mar 2025)

Re-evaluating Theory of Mind evaluation in large language models
Jennifer Hu, Felix Sosa, T. Ullman (28 Feb 2025)

Can Input Attributions Interpret the Inductive Reasoning Process Elicited in In-Context Learning?
Mengyu Ye, Tatsuki Kuribayashi, Goro Kobayashi, Jun Suzuki (20 Dec 2024)
Learning Syntax Without Planting Trees: Understanding Hierarchical Generalization in Transformers
Kabir Ahuja, Vidhisha Balachandran, Madhur Panwar, Tianxing He, Noah A. Smith, Navin Goyal, Yulia Tsvetkov (25 Apr 2024)

Grammatical information in BERT sentence embeddings as two-dimensional arrays
Vivi Nastase, Paola Merlo (15 Dec 2023)

In-context Learning Generalizes, But Not Always Robustly: The Case of Syntax
Aaron Mueller, Albert Webson, Jackson Petty, Tal Linzen (13 Nov 2023)
Uncovering Intermediate Variables in Transformers using Circuit Probing
Michael A. Lepori, Thomas Serre, Ellie Pavlick (07 Nov 2023)

Language Models Can Learn Exceptions to Syntactic Rules
Cara Su-Yi, Tal Linzen (09 Jun 2023)

Second Language Acquisition of Neural Language Models
Miyu Oba, Tatsuki Kuribayashi, Hiroki Ouchi, Taro Watanabe (05 Jun 2023)

How to Plant Trees in Language Models: Data and Architectural Effects on the Emergence of Syntactic Inductive Biases
Aaron Mueller, Tal Linzen (31 May 2023)
Measuring Inductive Biases of In-Context Learning with Underspecified Demonstrations
Chenglei Si, Dan Friedman, Nitish Joshi, Shi Feng, Danqi Chen, He He (22 May 2023)

Language Model Behavior: A Comprehensive Survey
Tyler A. Chang, Benjamin Bergen (20 Mar 2023)

Does Vision Accelerate Hierarchical Generalization of Neural Language Learners?
Tatsuki Kuribayashi (01 Feb 2023)

How poor is the stimulus? Evaluating hierarchical generalization in neural networks trained on child-directed speech
Aditya Yedetore, Tal Linzen, Robert Frank, R. Thomas McCoy (26 Jan 2023)
Dissociating language and thought in large language models
Kyle Mahowald, Anna A. Ivanova, I. Blank, Nancy Kanwisher, J. Tenenbaum, Evelina Fedorenko (16 Jan 2023)

What Artificial Neural Networks Can Tell Us About Human Language Acquisition
Alex Warstadt, Samuel R. Bowman (17 Aug 2022)

Coloring the Blank Slate: Pre-training Imparts a Hierarchical Inductive Bias to Sequence-to-sequence Models
Aaron Mueller, Robert Frank, Tal Linzen, Luheng Wang, Sebastian Schuster (17 Mar 2022)

Data-driven Model Generalizability in Crosslinguistic Low-resource Morphological Segmentation
Zoey Liu, Emily Tucker Prudhommeaux (05 Jan 2022)
Uncovering Constraint-Based Behavior in Neural Models via Targeted Fine-Tuning
Forrest Davis, Marten van Schijndel (02 Jun 2021)

SyGNS: A Systematic Generalization Testbed Based on Natural Language Semantics
Hitomi Yanaka, K. Mineshima, Kentaro Inui (02 Jun 2021)

Examining the Inductive Bias of Neural Language Models with Artificial Languages
Jennifer C. White, Ryan Cotterell (02 Jun 2021)

Learning Which Features Matter: RoBERTa Acquires a Preference for Linguistic Generalizations (Eventually)
Alex Warstadt, Yian Zhang, Haau-Sing Li, Haokun Liu, Samuel R. Bowman (11 Oct 2020)
Can RNNs trained on harder subject-verb agreement instances still perform well on easier ones?
Hritik Bansal, Gantavya Bhatt, Sumeet Agarwal (10 Oct 2020)

Recurrent babbling: evaluating the acquisition of grammar from limited input data
Ludovica Pannitto, Aurélie Herbelot (09 Oct 2020)

Can neural networks acquire a structural bias from raw linguistic data?
Alex Warstadt, Samuel R. Bowman (14 Jul 2020)

Emergence of Syntax Needs Minimal Supervision
Raphaël Bailly, Kata Gábor (03 May 2020)

Cross-Linguistic Syntactic Evaluation of Word Prediction Models
Aaron Mueller, Garrett Nicolai, Panayiota Petrou-Zeniou, N. Talmina, Tal Linzen (01 May 2020)
Recurrent Neural Network Language Models Always Learn English-Like Relative Clause Attachment
Forrest Davis, Marten van Schijndel (01 May 2020)

Syntactic Structure from Deep Learning
Tal Linzen, Marco Baroni (22 Apr 2020)

Overestimation of Syntactic Representation in Neural Language Models
Jordan Kodner, Nitish Gupta (10 Apr 2020)

Does syntax need to grow on trees? Sources of hierarchical inductive bias in sequence-to-sequence networks
R. Thomas McCoy, Robert Frank, Tal Linzen (10 Jan 2020)

BERTs of a feather do not generalize together: Large variability in generalization across models with similar test set performance
R. Thomas McCoy, Junghyun Min, Tal Linzen (07 Nov 2019)
Using Priming to Uncover the Organization of Syntactic Representations in Neural Language Models
Grusha Prasad, Marten van Schijndel, Tal Linzen (23 Sep 2019)

Does BERT agree? Evaluating knowledge of structure dependence through agreement relations
Geoff Bacon, T. Regier (26 Aug 2019)

Tabula nearly rasa: Probing the Linguistic Knowledge of Character-Level Neural Language Models Trained on Unsegmented Text
Michael Hahn, Marco Baroni (17 Jun 2019)

Hierarchical Representation in Neural Language Models: Suppression and Recovery of Expectations
Ethan Gotlieb Wilcox, R. Levy, Richard Futrell (10 Jun 2019)

Open Sesame: Getting Inside BERT's Linguistic Knowledge
Yongjie Lin, Y. Tan, Robert Frank (04 Jun 2019)
What Syntactic Structures block Dependencies in RNN Language Models?
Ethan Gotlieb Wilcox, R. Levy, Richard Futrell (24 May 2019)

CNNs found to jump around more skillfully than RNNs: Compositional generalization in seq2seq convolutional networks
Roberto Dessì, Marco Baroni (21 May 2019)

Studying the Inductive Biases of RNNs with Synthetic Variations of Natural Languages
Shauli Ravfogel, Yoav Goldberg, Tal Linzen (15 Mar 2019)

Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural Language Inference
R. Thomas McCoy, Ellie Pavlick, Tal Linzen (04 Feb 2019)

Human few-shot learning of compositional instructions
Brenden Lake, Tal Linzen, Marco Baroni (14 Jan 2019)
Analysis Methods in Neural Language Processing: A Survey
Yonatan Belinkov, James R. Glass (21 Dec 2018)

Do RNNs learn human-like abstract word order preferences?
Ainesh Bakshi, R. Levy (05 Nov 2018)

What can linguistics and deep learning contribute to each other?
Tal Linzen (11 Sep 2018)

What do RNN Language Models Learn about Filler-Gap Dependencies?
Ethan Gotlieb Wilcox, R. Levy, Takashi Morita, Richard Futrell (31 Aug 2018)

The Fine Line between Linguistic Generalization and Failure in Seq2Seq-Attention Models
Noah Weber, L. Shekhar, Niranjan Balasubramanian (03 May 2018)