Why Does Surprisal From Larger Transformer-Based Language Models Provide a Poorer Fit to Human Reading Times?

23 December 2022
Byung-Doh Oh
William Schuler
arXiv:2212.12131

Papers citing "Why Does Surprisal From Larger Transformer-Based Language Models Provide a Poorer Fit to Human Reading Times?" (50 papers shown)

Model Connectomes: A Generational Approach to Data-Efficient Language Models. Klemen Kotar, Greta Tuckute. 29 Apr 2025.
Signatures of human-like processing in Transformer forward passes. Jennifer Hu, Michael A. Lepori, Michael Franke. 18 Apr 2025.
Generative Linguistics, Large Language Models, and the Social Nature of Scientific Success. Sophie Hao. 25 Mar 2025.
Strategic resource allocation in memory encoding: An efficiency principle shaping language processing. Weijie Xu, Richard Futrell. 18 Mar 2025.
From Language to Cognition: How LLMs Outgrow the Human Language Network. Badr AlKhamissi, Greta Tuckute, Yingtian Tang, Taha Binhuraib, Antoine Bosselut, Martin Schrimpf. 03 Mar 2025.
Language Models Grow Less Humanlike beyond Phase Transition. Tatsuya Aoyama, Ethan Wilcox. 26 Feb 2025.
Anything Goes? A Crosslinguistic Study of (Im)possible Language Learning in LMs. Xiulin Yang, Tatsuya Aoyama, Yuekun Yao, Ethan Wilcox. 26 Feb 2025.
Eye Tracking Based Cognitive Evaluation of Automatic Readability Assessment Measures. Keren Gruteke Klein, Shachar Frenkel, Omer Shubi, Yevgeni Berzak. 16 Feb 2025.
Large Language Models Are Human-Like Internally. Tatsuki Kuribayashi, Yohei Oseki, Souhaib Ben Taieb, Kentaro Inui, Timothy Baldwin. 03 Feb 2025.
Learning to Write Rationally: How Information Is Distributed in Non-Native Speakers' Essays. Zixin Tang, Janet G. van Hell. 05 Nov 2024.
What Goes Into a LM Acceptability Judgment? Rethinking the Impact of Frequency and Length. Lindia Tjuatja, Graham Neubig, Tal Linzen, Sophie Hao. 04 Nov 2024.
Towards a Similarity-adjusted Surprisal Theory. Clara Meister, Mario Giulianelli, Tiago Pimentel. 23 Oct 2024.
A Psycholinguistic Evaluation of Language Models' Sensitivity to Argument Roles. Eun-Kyoung Rosa Lee, Sathvik Nair, Naomi Feldman. 21 Oct 2024.
Reverse-Engineering the Reader. Samuel Kiegeland, Ethan Gotlieb Wilcox, Afra Amini, David Robert Reich, Ryan Cotterell. 16 Oct 2024.
Large-scale cloze evaluation reveals that token prediction tasks are neither lexically nor semantically aligned. Cassandra L. Jacobs, Loïc Grobol, Alvin Tsang. 15 Oct 2024.
The Roles of Contextual Semantic Relevance Metrics in Human Visual Processing. Kun Sun, Rong Wang. 13 Oct 2024.
Linear Recency Bias During Training Improves Transformers' Fit to Reading Times. Christian Clark, Byung-Doh Oh, William Schuler. 17 Sep 2024.
Thesis proposal: Are We Losing Textual Diversity to Natural Language Processing? Josef Jon. 15 Sep 2024.
On the Role of Context in Reading Time Prediction. Andreas Opedal, Eleanor Chodroff, Ryan Cotterell, Ethan Gotlieb Wilcox. 12 Sep 2024.
Brain-Like Language Processing via a Shallow Untrained Multihead Attention Network. Badr AlKhamissi, Greta Tuckute, Antoine Bosselut, Martin Schrimpf. 21 Jun 2024.
How to Compute the Probability of a Word. Tiago Pimentel, Clara Meister. 20 Jun 2024.
SCAR: Efficient Instruction-Tuning for Large Language Models via Style Consistency-Aware Response Ranking. Zhuang Li, Yuncheng Hua, Thuy-Trang Vu, Haolan Zhan, Lizhen Qu, Gholamreza Haffari. 16 Jun 2024.
Leading Whitespaces of Language Models' Subword Vocabulary Poses a Confound for Calculating Word Probabilities. Byung-Doh Oh, William Schuler. 16 Jun 2024.
Language models emulate certain cognitive profiles: An investigation of how predictability measures interact with individual differences. Patrick Haller, Lena S. Bolliger, Lena Ann Jäger. 07 Jun 2024.
Filtered Corpus Training (FiCT) Shows that Language Models can Generalize from Indirect Evidence. Abhinav Patil, Jaap Jumelet, Yu Ying Chiu, Andy Lapastora, Peter Shen, Lexie Wang, Clevis Willrich, Shane Steinert-Threlkeld. 24 May 2024.
Revenge of the Fallen? Recurrent Models Match Transformers at Predicting Human Language Comprehension Metrics. J. Michaelov, Catherine Arnett, Benjamin Bergen. 30 Apr 2024.
Computational Sentence-level Metrics Predicting Human Sentence Comprehension. Kun Sun, Rong Wang. 23 Mar 2024.
Emergent Word Order Universals from Cognitively-Motivated Language Models. Tatsuki Kuribayashi, Ryo Ueda, Ryosuke Yoshida, Yohei Oseki, Ted Briscoe, Timothy Baldwin. 19 Feb 2024.
Frequency Explains the Inverse Correlation of Large Language Models' Size, Training Data Amount, and Surprisal's Fit to Reading Times. Byung-Doh Oh, Shisen Yue, William Schuler. 03 Feb 2024.
Describing Images Fast and Slow: Quantifying and Predicting the Variation in Human Signals during Visuo-Linguistic Processes. Ece Takmaz, Sandro Pezzelle, Raquel Fernández. 02 Feb 2024.
Multipath parsing in the brain. Berta Franzluebbers, Donald Dunagan, Miloš Stanojević, Jan Buys, John T. Hale. 31 Jan 2024.
Instruction-tuning Aligns LLMs to the Human Brain. Khai Loong Aw, Syrielle Montariol, Badr AlKhamissi, Martin Schrimpf, Antoine Bosselut. 01 Dec 2023.
Attribution and Alignment: Effects of Local Context Repetition on Utterance Production and Comprehension in Dialogue. Aron Molnar, Jaap Jumelet, Mario Giulianelli, Arabella J. Sinclair. 21 Nov 2023.
Temperature-scaling surprisal estimates improve fit to human reading times -- but does it do so for the "right reasons"? Tong Liu, Iza Škrjanec, Vera Demberg. 15 Nov 2023.
Structural Priming Demonstrates Abstract Grammatical Representations in Multilingual Language Models. J. Michaelov, Catherine Arnett, Tyler A. Chang, Benjamin Bergen. 15 Nov 2023.
Psychometric Predictive Power of Large Language Models. Tatsuki Kuribayashi, Yohei Oseki, Timothy Baldwin. 13 Nov 2023.
Large GPT-like Models are Bad Babies: A Closer Look at the Relationship between Linguistic Competence and Psycholinguistic Measures. Julius Steuer, Marius Mosbach, Dietrich Klakow. 08 Nov 2023.
Roles of Scaling and Instruction Tuning in Language Perception: Model vs. Human Attention. Changjiang Gao, Shujian Huang, Jixing Li, Jiajun Chen. 29 Oct 2023.
Words, Subwords, and Morphemes: What Really Matters in the Surprisal-Reading Time Relationship? Sathvik Nair, Philip Resnik. 26 Oct 2023.
When Language Models Fall in Love: Animacy Processing in Transformer Language Models. Michael Hanna, Yonatan Belinkov, Sandro Pezzelle. 23 Oct 2023.
Information Value: Measuring Utterance Predictability as Distance from Plausible Alternatives. Mario Giulianelli, Sarenne Wallbridge, Raquel Fernández. 20 Oct 2023.
Humans and language models diverge when predicting repeating text. Aditya R. Vaidya, Javier S. Turek, Alexander G. Huth. 10 Oct 2023.
Characterizing Learning Curves During Language Model Pre-Training: Learning, Forgetting, and Stability. Tyler A. Chang, Z. Tu, Benjamin Bergen. 29 Aug 2023.
Testing the Predictions of Surprisal Theory in 11 Languages. Ethan Gotlieb Wilcox, Tiago Pimentel, Clara Meister, Ryan Cotterell, R. Levy. 07 Jul 2023.
Investigating the Utility of Surprisal from Large Language Models for Speech Synthesis Prosody. Sofoklis Kakouros, J. Šimko, M. Vainio, Antti Suni. 16 Jun 2023.
Transformer-Based Language Model Surprisal Predicts Human Reading Times Best with About Two Billion Training Tokens. Byung-Doh Oh, William Schuler. 22 Apr 2023.
On the Effect of Anticipation on Reading Times. Tiago Pimentel, Clara Meister, Ethan Gotlieb Wilcox, R. Levy, Ryan Cotterell. 25 Nov 2022.
Syntactic Surprisal From Neural Models Predicts, But Underestimates, Human Processing Difficulty From Syntactic Ambiguities. Suhas Arehalli, Brian Dillon, Tal Linzen. 21 Oct 2022.
Context Limitations Make Neural Language Models More Human-Like. Tatsuki Kuribayashi, Yohei Oseki, Ana Brassard, Kentaro Inui. 23 May 2022.
Accounting for Agreement Phenomena in Sentence Comprehension with Transformer Language Models: Effects of Similarity-based Interference on Surprisal and Attention. S. Ryu, Richard L. Lewis. 26 Apr 2021.