arXiv: 2212.12131
Why Does Surprisal From Larger Transformer-Based Language Models Provide a Poorer Fit to Human Reading Times?
Byung-Doh Oh, William Schuler
23 December 2022
Papers citing "Why Does Surprisal From Larger Transformer-Based Language Models Provide a Poorer Fit to Human Reading Times?" (50 papers)
Model Connectomes: A Generational Approach to Data-Efficient Language Models
Klemen Kotar, Greta Tuckute (29 Apr 2025)

Signatures of human-like processing in Transformer forward passes
Jennifer Hu, Michael A. Lepori, Michael Franke (18 Apr 2025)

Generative Linguistics, Large Language Models, and the Social Nature of Scientific Success
Sophie Hao (25 Mar 2025)

Strategic resource allocation in memory encoding: An efficiency principle shaping language processing
Weijie Xu, Richard Futrell (18 Mar 2025)

From Language to Cognition: How LLMs Outgrow the Human Language Network
Badr AlKhamissi, Greta Tuckute, Yingtian Tang, Taha Binhuraib, Antoine Bosselut, Martin Schrimpf (03 Mar 2025)

Language Models Grow Less Humanlike beyond Phase Transition
Tatsuya Aoyama, Ethan Wilcox (26 Feb 2025)

Anything Goes? A Crosslinguistic Study of (Im)possible Language Learning in LMs
Xiulin Yang, Tatsuya Aoyama, Yuekun Yao, Ethan Wilcox (26 Feb 2025)

Eye Tracking Based Cognitive Evaluation of Automatic Readability Assessment Measures
Keren Gruteke Klein, Shachar Frenkel, Omer Shubi, Yevgeni Berzak (16 Feb 2025)

Large Language Models Are Human-Like Internally
Tatsuki Kuribayashi, Yohei Oseki, Souhaib Ben Taieb, Kentaro Inui, Timothy Baldwin (03 Feb 2025)
Learning to Write Rationally: How Information Is Distributed in Non-Native Speakers' Essays
Zixin Tang, Janet G. van Hell (05 Nov 2024)

What Goes Into a LM Acceptability Judgment? Rethinking the Impact of Frequency and Length
Lindia Tjuatja, Graham Neubig, Tal Linzen, Sophie Hao (04 Nov 2024)

Towards a Similarity-adjusted Surprisal Theory
Clara Meister, Mario Giulianelli, Tiago Pimentel (23 Oct 2024)

A Psycholinguistic Evaluation of Language Models' Sensitivity to Argument Roles
Eun-Kyoung Rosa Lee, Sathvik Nair, Naomi Feldman (21 Oct 2024)

Reverse-Engineering the Reader
Samuel Kiegeland, Ethan Gotlieb Wilcox, Afra Amini, David Robert Reich, Ryan Cotterell (16 Oct 2024)

Large-scale cloze evaluation reveals that token prediction tasks are neither lexically nor semantically aligned
Cassandra L. Jacobs, Loïc Grobol, Alvin Tsang (15 Oct 2024)

The Roles of Contextual Semantic Relevance Metrics in Human Visual Processing
Kun Sun, Rong Wang (13 Oct 2024)

Linear Recency Bias During Training Improves Transformers' Fit to Reading Times
Christian Clark, Byung-Doh Oh, William Schuler (17 Sep 2024)

Thesis proposal: Are We Losing Textual Diversity to Natural Language Processing?
Josef Jon (15 Sep 2024)
On the Role of Context in Reading Time Prediction
Andreas Opedal, Eleanor Chodroff, Ryan Cotterell, Ethan Gotlieb Wilcox (12 Sep 2024)

Brain-Like Language Processing via a Shallow Untrained Multihead Attention Network
Badr AlKhamissi, Greta Tuckute, Antoine Bosselut, Martin Schrimpf (21 Jun 2024)

How to Compute the Probability of a Word
Tiago Pimentel, Clara Meister (20 Jun 2024)

SCAR: Efficient Instruction-Tuning for Large Language Models via Style Consistency-Aware Response Ranking
Zhuang Li, Yuncheng Hua, Thuy-Trang Vu, Haolan Zhan, Lizhen Qu, Gholamreza Haffari (16 Jun 2024)

Leading Whitespaces of Language Models' Subword Vocabulary Poses a Confound for Calculating Word Probabilities
Byung-Doh Oh, William Schuler (16 Jun 2024)

Language models emulate certain cognitive profiles: An investigation of how predictability measures interact with individual differences
Patrick Haller, Lena S. Bolliger, Lena Ann Jäger (07 Jun 2024)

Filtered Corpus Training (FiCT) Shows that Language Models can Generalize from Indirect Evidence
Abhinav Patil, Jaap Jumelet, Yu Ying Chiu, Andy Lapastora, Peter Shen, Lexie Wang, Clevis Willrich, Shane Steinert-Threlkeld (24 May 2024)

Revenge of the Fallen? Recurrent Models Match Transformers at Predicting Human Language Comprehension Metrics
J. Michaelov, Catherine Arnett, Benjamin Bergen (30 Apr 2024)

Computational Sentence-level Metrics Predicting Human Sentence Comprehension
Kun Sun, Rong Wang (23 Mar 2024)

Emergent Word Order Universals from Cognitively-Motivated Language Models
Tatsuki Kuribayashi, Ryo Ueda, Ryosuke Yoshida, Yohei Oseki, Ted Briscoe, Timothy Baldwin (19 Feb 2024)
Frequency Explains the Inverse Correlation of Large Language Models' Size, Training Data Amount, and Surprisal's Fit to Reading Times
Byung-Doh Oh, Shisen Yue, William Schuler (03 Feb 2024)

Describing Images Fast and Slow: Quantifying and Predicting the Variation in Human Signals during Visuo-Linguistic Processes
Ece Takmaz, Sandro Pezzelle, Raquel Fernández (02 Feb 2024)

Multipath parsing in the brain
Berta Franzluebbers, Donald Dunagan, Miloš Stanojević, Jan Buys, John T. Hale (31 Jan 2024)

Instruction-tuning Aligns LLMs to the Human Brain
Khai Loong Aw, Syrielle Montariol, Badr AlKhamissi, Martin Schrimpf, Antoine Bosselut (01 Dec 2023)

Attribution and Alignment: Effects of Local Context Repetition on Utterance Production and Comprehension in Dialogue
Aron Molnar, Jaap Jumelet, Mario Giulianelli, Arabella J. Sinclair (21 Nov 2023)

Temperature-scaling surprisal estimates improve fit to human reading times -- but does it do so for the "right reasons"?
Tong Liu, Iza Škrjanec, Vera Demberg (15 Nov 2023)

Structural Priming Demonstrates Abstract Grammatical Representations in Multilingual Language Models
J. Michaelov, Catherine Arnett, Tyler A. Chang, Benjamin Bergen (15 Nov 2023)
Psychometric Predictive Power of Large Language Models
Tatsuki Kuribayashi, Yohei Oseki, Timothy Baldwin (13 Nov 2023)

Large GPT-like Models are Bad Babies: A Closer Look at the Relationship between Linguistic Competence and Psycholinguistic Measures
Julius Steuer, Marius Mosbach, Dietrich Klakow (08 Nov 2023)

Roles of Scaling and Instruction Tuning in Language Perception: Model vs. Human Attention
Changjiang Gao, Shujian Huang, Jixing Li, Jiajun Chen (29 Oct 2023)

Words, Subwords, and Morphemes: What Really Matters in the Surprisal-Reading Time Relationship?
Sathvik Nair, Philip Resnik (26 Oct 2023)

When Language Models Fall in Love: Animacy Processing in Transformer Language Models
Michael Hanna, Yonatan Belinkov, Sandro Pezzelle (23 Oct 2023)

Information Value: Measuring Utterance Predictability as Distance from Plausible Alternatives
Mario Giulianelli, Sarenne Wallbridge, Raquel Fernández (20 Oct 2023)

Humans and language models diverge when predicting repeating text
Aditya R. Vaidya, Javier S. Turek, Alexander G. Huth (10 Oct 2023)

Characterizing Learning Curves During Language Model Pre-Training: Learning, Forgetting, and Stability
Tyler A. Chang, Z. Tu, Benjamin Bergen (29 Aug 2023)

Testing the Predictions of Surprisal Theory in 11 Languages
Ethan Gotlieb Wilcox, Tiago Pimentel, Clara Meister, Ryan Cotterell, R. Levy (07 Jul 2023)
Investigating the Utility of Surprisal from Large Language Models for Speech Synthesis Prosody
Sofoklis Kakouros, J. Šimko, M. Vainio, Antti Suni (16 Jun 2023)

Transformer-Based Language Model Surprisal Predicts Human Reading Times Best with About Two Billion Training Tokens
Byung-Doh Oh, William Schuler (22 Apr 2023)

On the Effect of Anticipation on Reading Times
Tiago Pimentel, Clara Meister, Ethan Gotlieb Wilcox, R. Levy, Ryan Cotterell (25 Nov 2022)

Syntactic Surprisal From Neural Models Predicts, But Underestimates, Human Processing Difficulty From Syntactic Ambiguities
Suhas Arehalli, Brian Dillon, Tal Linzen (21 Oct 2022)

Context Limitations Make Neural Language Models More Human-Like
Tatsuki Kuribayashi, Yohei Oseki, Ana Brassard, Kentaro Inui (23 May 2022)

Accounting for Agreement Phenomena in Sentence Comprehension with Transformer Language Models: Effects of Similarity-based Interference on Surprisal and Attention
S. Ryu, Richard L. Lewis (26 Apr 2021)