Neural Language Models as Psycholinguistic Subjects: Representations of Syntactic State

8 March 2019
Richard Futrell, Ethan Gotlieb Wilcox, Takashi Morita, Peng Qian, Miguel Ballesteros, R. Levy
Communities: MILM

Papers citing "Neural Language Models as Psycholinguistic Subjects: Representations of Syntactic State"

50 / 114 papers shown

Expectations over Unspoken Alternatives Predict Pragmatic Inferences
Jennifer Hu, R. Levy, Judith Degen, Sebastian Schuster
07 Apr 2023

A Discerning Several Thousand Judgments: GPT-3 Rates the Article + Adjective + Numeral + Noun Construction
Kyle Mahowald
29 Jan 2023

A fine-grained comparison of pragmatic language understanding in humans and language models
Jennifer Hu, Sammy Floyd, Olessia Jouravlev, Evelina Fedorenko, E. Gibson
13 Dec 2022

Probing for Incremental Parse States in Autoregressive Language Models
Tiwalayo Eisape, Vineet Gangireddy, R. Levy, Yoon Kim
17 Nov 2022

Collateral facilitation in humans and language models
J. Michaelov, Benjamin Bergen
09 Nov 2022

Can language models handle recursively nested grammatical structures? A case study on comparing models and humans
Andrew Kyle Lampinen
Communities: ReLM, ELM
27 Oct 2022

Characterizing Verbatim Short-Term Memory in Neural Language Models
K. Armeni, C. Honey, Tal Linzen
Communities: KELM, RALM
24 Oct 2022

Predicting Fine-Tuning Performance with Probing
Zining Zhu, Soroosh Shahtalebi, Frank Rudzicz
13 Oct 2022

COMPS: Conceptual Minimal Pair Sentences for testing Robust Property Knowledge and its Inheritance in Pre-trained Language Models
Kanishka Misra, Julia Taylor Rayz, Allyson Ettinger
05 Oct 2022

Garden-Path Traversal in GPT-2
William Jurayj, William Rudman, Carsten Eickhoff
24 May 2022

The Curious Case of Control
Elias Stengel-Eskin, Benjamin Van Durme
24 May 2022

Is the Computation of Abstract Sameness Relations Human-Like in Neural Language Models?
Lukas Thoma, Benjamin Roth
12 May 2022

When a sentence does not introduce a discourse entity, Transformer-based models still sometimes refer to it
Sebastian Schuster, Tal Linzen
06 May 2022

minicons: Enabling Flexible Behavioral and Representational Analyses of Transformer Language Models
Kanishka Misra
24 Mar 2022

Transformer Grammars: Augmenting Transformer Language Models with Syntactic Inductive Biases at Scale
Laurent Sartran, Samuel Barrett, A. Kuncoro, Miloš Stanojević, Phil Blunsom, Chris Dyer
01 Mar 2022

How Does Data Corruption Affect Natural Language Understanding Models? A Study on GLUE datasets
Aarne Talman, Marianna Apidianaki, S. Chatzikyriakidis, Jörg Tiedemann
Communities: ELM
12 Jan 2022

Data-driven Model Generalizability in Crosslinguistic Low-resource Morphological Segmentation
Zoey Liu, Emily Tucker Prudhommeaux
05 Jan 2022

Variation and generality in encoding of syntactic anomaly information in sentence embeddings
Qinxuan Wu, Allyson Ettinger
12 Nov 2021

Schrödinger's Tree -- On Syntax and Neural Language Models
Artur Kulmizev, Joakim Nivre
17 Oct 2021

Word Acquisition in Neural Language Models
Tyler A. Chang, Benjamin Bergen
05 Oct 2021

Structural Persistence in Language Models: Priming as a Window into Abstract Language Representations
Arabella J. Sinclair, Jaap Jumelet, Willem H. Zuidema, Raquel Fernández
30 Sep 2021

Shaking Syntactic Trees on the Sesame Street: Multilingual Probing with Controllable Perturbations
Ekaterina Taktasheva, Vladislav Mikhailov, Ekaterina Artemova
28 Sep 2021

Sorting through the noise: Testing robustness of information processing in pre-trained language models
Lalchand Pandia, Allyson Ettinger
25 Sep 2021

Controlled Evaluation of Grammatical Knowledge in Mandarin Chinese Language Models
Yiwen Wang, Jennifer Hu, R. Levy, Peng Qian
22 Sep 2021

The Language Model Understood the Prompt was Ambiguous: Probing Syntactic Uncertainty Through Generation
Laura Aina, Tal Linzen
Communities: UQLM
16 Sep 2021

Connecting degree and polarity: An artificial language learning study
Lisa Bylinina, Alexey Tikhonov, Ekaterina Garmash
Communities: AI4CE
13 Sep 2021

Transformers in the loop: Polarity in neural models of language
Lisa Bylinina, Alexey Tikhonov
08 Sep 2021

How much pretraining data do language models need to learn syntax?
Laura Pérez-Mayos, Miguel Ballesteros, Leo Wanner
07 Sep 2021

So Cloze yet so Far: N400 Amplitude is Better Predicted by Distributional Information than Human Predictability Judgements
J. Michaelov, S. Coulson, Benjamin Bergen
02 Sep 2021

Structural Guidance for Transformer Language Models
Peng Qian, Tahira Naseem, R. Levy, Ramón Fernández Astudillo
30 Jul 2021

On the proper role of linguistically-oriented deep net analysis in linguistic theorizing
Marco Baroni
16 Jun 2021

Model Explainability in Deep Learning Based Natural Language Processing
Shafie Gholizadeh, Nengfeng Zhou
14 Jun 2021

Causal Analysis of Syntactic Agreement Mechanisms in Neural Language Models
Matthew Finlayson, Aaron Mueller, Sebastian Gehrmann, Stuart M. Shieber, Tal Linzen, Yonatan Belinkov
10 Jun 2021

A Targeted Assessment of Incremental Processing in Neural Language Models and Humans
Ethan Gotlieb Wilcox, P. Vani, R. Levy
06 Jun 2021

Language Models Use Monotonicity to Assess NPI Licensing
Jaap Jumelet, Milica Denić, Jakub Szymanik, Dieuwke Hupkes, Shane Steinert-Threlkeld
Communities: KELM
28 May 2021

The Low-Dimensional Linear Geometry of Contextualized Word Representations
Evan Hernandez, Jacob Andreas
Communities: MILM
15 May 2021

Assessing the Syntactic Capabilities of Transformer-based Multilingual Language Models
Laura Pérez-Mayos, Alba Táboas García, Simon Mille, Leo Wanner
Communities: ELM, LRM
10 May 2021

Sensitivity as a Complexity Measure for Sequence Classification Tasks
Michael Hahn, Dan Jurafsky, Richard Futrell
21 Apr 2021

Refining Targeted Syntactic Evaluation of Language Models
Benjamin Newman, Kai-Siang Ang, Julia Gong, John Hewitt
19 Apr 2021

Syntactic Perturbations Reveal Representational Correlates of Hierarchical Phrase Structure in Pretrained Language Models
Matteo Alleman, J. Mamou, Miguel Rio, Hanlin Tang, Yoon Kim, SueYeon Chung
Communities: NAI
15 Apr 2021

Vyākarana: A Colorless Green Benchmark for Syntactic Evaluation in Indic Languages
Rajaswa Patil, Jasleen Dhillon, Siddhant Mahurkar, Saumitra Kulkarni, M. Malhotra, V. Baths
01 Mar 2021

Deep Subjecthood: Higher-Order Grammatical Features in Multilingual BERT
Isabel Papadimitriou, Ethan A. Chi, Richard Futrell, Kyle Mahowald
26 Jan 2021

Evaluating Models of Robust Word Recognition with Serial Reproduction
Stephan C. Meylan, Sathvik Nair, Thomas L. Griffiths
24 Jan 2021

Recoding latent sentence representations -- Dynamic gradient-based activation modification in RNNs
Dennis Ulmer
03 Jan 2021

Investigating Novel Verb Learning in BERT: Selectional Preference Classes and Alternation-Based Syntactic Generalization
Tristan Thrush, Ethan Gotlieb Wilcox, R. Levy
04 Nov 2020

Word Frequency Does Not Predict Grammatical Knowledge in Language Models
Charles Yu, Ryan Sie, Nicolas Tedeschi, Leon Bergen
26 Oct 2020

Learning to Recognize Dialect Features
Dorottya Demszky, D. Sharma, J. Clark, Vinodkumar Prabhakaran, Jacob Eisenstein
23 Oct 2020

How well does surprisal explain N400 amplitude under different experimental conditions?
J. Michaelov, Benjamin Bergen
09 Oct 2020

Learning Context-Free Languages with Nondeterministic Stack RNNs
Brian DuSell, David Chiang
09 Oct 2020

Assessing Phrasal Representation and Composition in Transformers
Lang-Chi Yu, Allyson Ettinger
Communities: CoGe
08 Oct 2020