Syntactic Surprisal From Neural Models Predicts, But Underestimates, Human Processing Difficulty From Syntactic Ambiguities
arXiv:2210.12187 · 21 October 2022
Suhas Arehalli, Brian Dillon, Tal Linzen
Papers citing "Syntactic Surprisal From Neural Models Predicts, But Underestimates, Human Processing Difficulty From Syntactic Ambiguities" (18 papers):
Signatures of human-like processing in Transformer forward passes. Jennifer Hu, Michael A. Lepori, Michael Franke. 18 Apr 2025.
Towards a Similarity-adjusted Surprisal Theory. Clara Meister, Mario Giulianelli, Tiago Pimentel. 23 Oct 2024.
A Psycholinguistic Evaluation of Language Models' Sensitivity to Argument Roles. Eun-Kyoung Rosa Lee, Sathvik Nair, Naomi Feldman. 21 Oct 2024.
Filtered Corpus Training (FiCT) Shows that Language Models can Generalize from Indirect Evidence. Abhinav Patil, Jaap Jumelet, Yu Ying Chiu, Andy Lapastora, Peter Shen, Lexie Wang, Clevis Willrich, Shane Steinert-Threlkeld. 24 May 2024.
From Form(s) to Meaning: Probing the Semantic Depths of Language Models Using Multisense Consistency. Xenia Ohmer, Elia Bruni, Dieuwke Hupkes. 18 Apr 2024.
Computational Sentence-level Metrics Predicting Human Sentence Comprehension. Kun Sun, Rong Wang. 23 Mar 2024.
Predictions from language models for multiple-choice tasks are not robust under variation of scoring methods. Polina Tsvilodub, Hening Wang, Sharon Grosch, Michael Franke. 01 Mar 2024.
When Only Time Will Tell: Interpreting How Transformers Process Local Ambiguities Through the Lens of Restart-Incrementality. Brielen Madureira, Patrick Kahardipraja, David Schlangen. 20 Feb 2024.
Describing Images Fast and Slow: Quantifying and Predicting the Variation in Human Signals during Visuo-Linguistic Processes. Ece Takmaz, Sandro Pezzelle, Raquel Fernández. 02 Feb 2024.
Large GPT-like Models are Bad Babies: A Closer Look at the Relationship between Linguistic Competence and Psycholinguistic Measures. Julius Steuer, Marius Mosbach, Dietrich Klakow. 08 Nov 2023.
Can Language Models Be Tricked by Language Illusions? Easier with Syntax, Harder with Semantics. Yuhan Zhang, Edward Gibson, Forrest Davis. 02 Nov 2023.
A Language Model with Limited Memory Capacity Captures Interference in Human Sentence Processing. William Timkey, Tal Linzen. 24 Oct 2023.
When Language Models Fall in Love: Animacy Processing in Transformer Language Models. Michael Hanna, Yonatan Belinkov, Sandro Pezzelle. 23 Oct 2023.
Information Value: Measuring Utterance Predictability as Distance from Plausible Alternatives. Mario Giulianelli, Sarenne Wallbridge, Raquel Fernández. 20 Oct 2023.
Structural Ambiguity and its Disambiguation in Language Model Based Parsers: the Case of Dutch Clause Relativization. G. Wijnholds, M. Moortgat. 24 May 2023.
Dissociating language and thought in large language models. Kyle Mahowald, Anna A. Ivanova, I. Blank, Nancy Kanwisher, J. Tenenbaum, Evelina Fedorenko. 16 Jan 2023.
Why Does Surprisal From Larger Transformer-Based Language Models Provide a Poorer Fit to Human Reading Times? Byung-Doh Oh, William Schuler. 23 Dec 2022.
Testing the limits of natural language models for predicting human language judgments. Tal Golan, Matthew Siegelman, N. Kriegeskorte, Christopher A. Baldassano. 07 Apr 2022.