On the Predictive Power of Neural Language Models for Human Real-Time Comprehension Behavior (arXiv:2006.01912)
2 June 2020
Ethan Gotlieb Wilcox, Jon Gauthier, Jennifer Hu, Peng Qian, R. Levy

Papers citing "On the Predictive Power of Neural Language Models for Human Real-Time Comprehension Behavior" (31 papers)

Model Connectomes: A Generational Approach to Data-Efficient Language Models
Klemen Kotar, Greta Tuckute
29 Apr 2025

Do Large Language Models know who did what to whom?
Joseph M. Denning, Xiaohan, Bryor Snefjella, Idan A. Blank
23 Apr 2025

Signatures of human-like processing in Transformer forward passes
Jennifer Hu, Michael A. Lepori, Michael Franke
18 Apr 2025

Pretraining Language Models for Diachronic Linguistic Change Discovery
Elisabeth Fittschen, Sabrina Li, Tom Lippincott, Leshem Choshen, Craig Messner
07 Apr 2025

Large Language Models Are Human-Like Internally
Tatsuki Kuribayashi, Yohei Oseki, Souhaib Ben Taieb, Kentaro Inui, Timothy Baldwin
03 Feb 2025

On the Role of Context in Reading Time Prediction
Andreas Opedal, Eleanor Chodroff, Ryan Cotterell, Ethan Gotlieb Wilcox
12 Sep 2024

Large Language Models are Biased Because They Are Large Language Models
Philip Resnik
19 Jun 2024

Language models emulate certain cognitive profiles: An investigation of how predictability measures interact with individual differences
Patrick Haller, Lena S. Bolliger, Lena Ann Jäger
07 Jun 2024

Filtered Corpus Training (FiCT) Shows that Language Models can Generalize from Indirect Evidence
Abhinav Patil, Jaap Jumelet, Yu Ying Chiu, Andy Lapastora, Peter Shen, Lexie Wang, Clevis Willrich, Shane Steinert-Threlkeld
24 May 2024

Towards a Path Dependent Account of Category Fluency
David Heineman, Reba Koenen, Sashank Varma
09 May 2024

Quantifying the redundancy between prosody and text
Lukas Wolf, Tiago Pimentel, Evelina Fedorenko, Ryan Cotterell, Alex Warstadt, Ethan Gotlieb Wilcox, Tamar I. Regev
28 Nov 2023

Temperature-scaling surprisal estimates improve fit to human reading times -- but does it do so for the "right reasons"?
Tong Liu, Iza Škrjanec, Vera Demberg
15 Nov 2023

Words, Subwords, and Morphemes: What Really Matters in the Surprisal-Reading Time Relationship?
Sathvik Nair, Philip Resnik
26 Oct 2023

Information Value: Measuring Utterance Predictability as Distance from Plausible Alternatives
Mario Giulianelli, Sarenne Wallbridge, Raquel Fernández
20 Oct 2023

Visual Grounding Helps Learn Word Meanings in Low-Data Regimes
Chengxu Zhuang, Evelina Fedorenko, Jacob Andreas
20 Oct 2023

Testing the Predictions of Surprisal Theory in 11 Languages
Ethan Gotlieb Wilcox, Tiago Pimentel, Clara Meister, Ryan Cotterell, R. Levy
07 Jul 2023

A Comparative Study on Textual Saliency of Styles from Eye Tracking, Annotations, and Language Models
Karin de Langis, Dongyeop Kang
19 Dec 2022

A unified information-theoretic model of EEG signatures of human language processing
Jiaxuan Li, Richard Futrell
16 Dec 2022

On the Effect of Anticipation on Reading Times
Tiago Pimentel, Clara Meister, Ethan Gotlieb Wilcox, R. Levy, Ryan Cotterell
25 Nov 2022

Probing for Incremental Parse States in Autoregressive Language Models
Tiwalayo Eisape, Vineet Gangireddy, R. Levy, Yoon Kim
17 Nov 2022

Composition, Attention, or Both?
Ryosuke Yoshida, Yohei Oseki
24 Oct 2022

Memory in humans and deep language models: Linking hypotheses for model augmentation
Omri Raccah, Pheobe Chen, Ted Willke, David Poeppel, Vy A. Vo
04 Oct 2022

Language models show human-like content effects on reasoning tasks
Ishita Dasgupta, Andrew Kyle Lampinen, Stephanie C. Y. Chan, Hannah R. Sheahan, Antonia Creswell, D. Kumaran, James L. McClelland, Felix Hill
14 Jul 2022

On the probability-quality paradox in language generation
Clara Meister, Gian Wiher, Tiago Pimentel, Ryan Cotterell
31 Mar 2022

Language Models Explain Word Reading Times Better Than Empirical Predictability
M. Hofmann, Steffen Remus, Chris Biemann, R. Radach, L. Kuchinke
02 Feb 2022

A surprisal--duration trade-off across and within the world's languages
Tiago Pimentel, Clara Meister, Elizabeth Salesky, Simone Teufel, Damián E. Blasi, Ryan Cotterell
30 Sep 2021

Modeling Human Sentence Processing with Left-Corner Recurrent Neural Network Grammars
Ryo Yoshida, Hiroshi Noji, Yohei Oseki
10 Sep 2021

Different kinds of cognitive plausibility: why are transformers better than RNNs at predicting N400 amplitude?
J. Michaelov, Megan D. Bardolph, S. Coulson, Benjamin Bergen
20 Jul 2021

Can Transformer Language Models Predict Psychometric Properties?
Antonio Laverghetta, Animesh Nighojkar, Jamshidbek Mirzakhalov, John Licato
12 Jun 2021

Refining Targeted Syntactic Evaluation of Language Models
Benjamin Newman, Kai-Siang Ang, Julia Gong, John Hewitt
19 Apr 2021

Probabilistic Predictions of People Perusing: Evaluating Metrics of Language Model Performance for Psycholinguistic Modeling
Sophie Hao, S. Mendelsohn, Rachel Sterneck, Randi Martinez, Robert Frank
08 Sep 2020