arXiv:2212.11185
Entropy- and Distance-Based Predictors From GPT-2 Attention Patterns Predict Reading Times Over and Above GPT-2 Surprisal
21 December 2022
Byung-Doh Oh, William Schuler
Papers citing "Entropy- and Distance-Based Predictors From GPT-2 Attention Patterns Predict Reading Times Over and Above GPT-2 Surprisal"
9 papers shown:
Do Large Language Models know who did what to whom?
Joseph M. Denning, Xiaohan, Bryor Snefjella, Idan A. Blank (23 Apr 2025)
Learning to Write Rationally: How Information Is Distributed in Non-Native Speakers' Essays
Zixin Tang, Janet G. van Hell (05 Nov 2024)
Linear Recency Bias During Training Improves Transformers' Fit to Reading Times
Christian Clark, Byung-Doh Oh, William Schuler (17 Sep 2024)
Predicting Human Translation Difficulty with Neural Machine Translation
Zheng Wei Lim, Ekaterina Vylomova, Charles Kemp, Trevor Cohn (19 Dec 2023)
A Language Model with Limited Memory Capacity Captures Interference in Human Sentence Processing
William Timkey, Tal Linzen (24 Oct 2023)
Training-free Diffusion Model Adaptation for Variable-Sized Text-to-Image Synthesis
Zhiyu Jin, Xuli Shen, Bin Li, Xiangyang Xue (14 Jun 2023)
Security Knowledge-Guided Fuzzing of Deep Learning Libraries
Nima Shiri Harzevili, Mohammad Mahdi Mohajer, Moshi Wei, H. Pham, Song Wang [AAML, AI4CE] (05 Jun 2023)
Analyzing Feed-Forward Blocks in Transformers through the Lens of Attention Maps
Goro Kobayashi, Tatsuki Kuribayashi, Sho Yokoi, Kentaro Inui (01 Feb 2023)
Why Does Surprisal From Larger Transformer-Based Language Models Provide a Poorer Fit to Human Reading Times?
Byung-Doh Oh, William Schuler (23 Dec 2022)