Human Sentence Processing: Recurrence or Attention?
Danny Merkx, S. Frank
arXiv: 2005.09471, 19 May 2020 (v2 latest)
Papers citing "Human Sentence Processing: Recurrence or Attention?" (23 papers):
Surprisal from Larger Transformer-based Language Models Predicts fMRI Data More Poorly
Yi-Chien Lin, William Schuler (12 Jun 2025)
Do Large Language Models know who did what to whom?
Joseph M. Denning, Xiaohan, Bryor Snefjella, Idan A. Blank (23 Apr 2025)
Large Language Models Are Human-Like Internally
Tatsuki Kuribayashi, Yohei Oseki, Souhaib Ben Taieb, Kentaro Inui, Timothy Baldwin (03 Feb 2025)
Are words equally surprising in audio and audio-visual comprehension?
Pranava Madhyastha, Ye Zhang, G. Vigliocco (14 Jul 2023)
Transformer-Based Language Model Surprisal Predicts Human Reading Times Best with About Two Billion Training Tokens
Byung-Doh Oh, William Schuler (22 Apr 2023)
Why Does Surprisal From Larger Transformer-Based Language Models Provide a Poorer Fit to Human Reading Times?
Byung-Doh Oh, William Schuler (23 Dec 2022)
Entropy- and Distance-Based Predictors From GPT-2 Attention Patterns Predict Reading Times Over and Above GPT-2 Surprisal
Byung-Doh Oh, William Schuler (21 Dec 2022)
Rarely a problem? Language models exhibit inverse scaling in their predictions following few-type quantifiers
J. Michaelov, Benjamin Bergen (16 Dec 2022)
Collateral facilitation in humans and language models
J. Michaelov, Benjamin Bergen (09 Nov 2022)
A Comprehensive Comparison of Neural Networks as Cognitive Models of Inflection
Adam Wiemerslage, Shiran Dudy, Katharina Kann (22 Oct 2022)
Eye-tracking based classification of Mandarin Chinese readers with and without dyslexia using neural sequence models
Patrick Haller, Andreas Säuberli, Sarah Elisabeth Kiener, Jinger Pan, Ming Yan, Lena Jäger (18 Oct 2022)
Construction Repetition Reduces Information Rate in Dialogue
Mario Giulianelli, Arabella J. Sinclair, Raquel Fernández (15 Oct 2022)
Do language models make human-like predictions about the coreferents of Italian anaphoric zero pronouns?
J. Michaelov, Benjamin Bergen (30 Aug 2022)
Context Limitations Make Neural Language Models More Human-Like
Tatsuki Kuribayashi, Yohei Oseki, Ana Brassard, Kentaro Inui (23 May 2022)
Predicting Human Psychometric Properties Using Computational Language Models
Antonio Laverghetta, Animesh Nighojkar, Jamshidbek Mirzakhalov, John Licato (12 May 2022)
Continuous-Time Audiovisual Fusion with Recurrence vs. Attention for In-The-Wild Affect Recognition
Vincent Karas, M. Tellamekala, Adria Mallol-Ragolta, Michel Valstar, Björn W. Schuller (24 Mar 2022)
minicons: Enabling Flexible Behavioral and Representational Analyses of Transformer Language Models
Kanishka Misra (24 Mar 2022)
Measuring the Impact of (Psycho-)Linguistic and Readability Features and Their Spill Over Effects on the Prediction of Eye Movement Patterns
Daniel Wiechmann, Yu Qiao, E. Kerz, Justus Mattern (15 Mar 2022)
Incorporating Residual and Normalization Layers into Analysis of Masked Language Models
Goro Kobayashi, Tatsuki Kuribayashi, Sho Yokoi, Kentaro Inui (15 Sep 2021)
Modeling Human Sentence Processing with Left-Corner Recurrent Neural Network Grammars
Ryo Yoshida, Hiroshi Noji, Yohei Oseki (10 Sep 2021)
So Cloze yet so Far: N400 Amplitude is Better Predicted by Distributional Information than Human Predictability Judgements
J. Michaelov, S. Coulson, Benjamin Bergen (02 Sep 2021)
Can Transformer Language Models Predict Psychometric Properties?
Antonio Laverghetta, Animesh Nighojkar, Jamshidbek Mirzakhalov, John Licato (12 Jun 2021)
Lower Perplexity is Not Always Human-Like
Tatsuki Kuribayashi, Yohei Oseki, Takumi Ito, Ryo Yoshida, Masayuki Asahara, Kentaro Inui (02 Jun 2021)