Transformer-Based Language Model Surprisal Predicts Human Reading Times Best with About Two Billion Training Tokens

Byung-Doh Oh, William Schuler
22 April 2023

Papers citing "Transformer-Based Language Model Surprisal Predicts Human Reading Times Best with About Two Billion Training Tokens"

19 papers shown
Model Connectomes: A Generational Approach to Data-Efficient Language Models
Klemen Kotar, Greta Tuckute
29 Apr 2025
Anything Goes? A Crosslinguistic Study of (Im)possible Language Learning in LMs
Xiulin Yang, Tatsuya Aoyama, Yuekun Yao, Ethan Wilcox
26 Feb 2025
Language Models Grow Less Humanlike beyond Phase Transition
Tatsuya Aoyama, Ethan Wilcox
26 Feb 2025
Large Language Models Are Human-Like Internally
Tatsuki Kuribayashi, Yohei Oseki, Souhaib Ben Taieb, Kentaro Inui, Timothy Baldwin
03 Feb 2025
Reverse-Engineering the Reader
Samuel Kiegeland, Ethan Gotlieb Wilcox, Afra Amini, David Robert Reich, Ryan Cotterell
16 Oct 2024
Large-scale cloze evaluation reveals that token prediction tasks are neither lexically nor semantically aligned
Cassandra L. Jacobs, Loïc Grobol, Alvin Tsang
15 Oct 2024
Linear Recency Bias During Training Improves Transformers' Fit to Reading Times
Christian Clark, Byung-Doh Oh, William Schuler
17 Sep 2024
On the Role of Context in Reading Time Prediction
Andreas Opedal, Eleanor Chodroff, Ryan Cotterell, Ethan Gotlieb Wilcox
12 Sep 2024
How to Compute the Probability of a Word
Tiago Pimentel, Clara Meister
20 Jun 2024
Leading Whitespaces of Language Models' Subword Vocabulary Poses a Confound for Calculating Word Probabilities
Byung-Doh Oh, William Schuler
16 Jun 2024
Language models emulate certain cognitive profiles: An investigation of how predictability measures interact with individual differences
Patrick Haller, Lena S. Bolliger, Lena Ann Jäger
07 Jun 2024
Filtered Corpus Training (FiCT) Shows that Language Models can Generalize from Indirect Evidence
Abhinav Patil, Jaap Jumelet, Yu Ying Chiu, Andy Lapastora, Peter Shen, Lexie Wang, Clevis Willrich, Shane Steinert-Threlkeld
24 May 2024
Revenge of the Fallen? Recurrent Models Match Transformers at Predicting Human Language Comprehension Metrics
J. Michaelov, Catherine Arnett, Benjamin Bergen
30 Apr 2024
From "um" to "yeah": Producing, predicting, and regulating information flow in human conversation
Claire Bergey, Simon DeDeo
13 Mar 2024
Frequency Explains the Inverse Correlation of Large Language Models' Size, Training Data Amount, and Surprisal's Fit to Reading Times
Byung-Doh Oh, Shisen Yue, William Schuler
03 Feb 2024
Describing Images Fast and Slow: Quantifying and Predicting the Variation in Human Signals during Visuo-Linguistic Processes
Ece Takmaz, Sandro Pezzelle, Raquel Fernández
02 Feb 2024
Temperature-scaling surprisal estimates improve fit to human reading times -- but does it do so for the "right reasons"?
Tong Liu, Iza Škrjanec, Vera Demberg
15 Nov 2023
Context Limitations Make Neural Language Models More Human-Like
Tatsuki Kuribayashi, Yohei Oseki, Ana Brassard, Kentaro Inui
23 May 2022
The Pile: An 800GB Dataset of Diverse Text for Language Modeling
Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, ..., Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, Connor Leahy
31 Dec 2020