Words, Subwords, and Morphemes: What Really Matters in the Surprisal-Reading Time Relationship?
Sathvik Nair, Philip Resnik
arXiv:2310.17774, 26 October 2023
Papers citing "Words, Subwords, and Morphemes: What Really Matters in the Surprisal-Reading Time Relationship?" (6 papers)
- Large-scale cloze evaluation reveals that token prediction tasks are neither lexically nor semantically aligned. Cassandra L. Jacobs, Loïc Grobol, Alvin Tsang. 15 Oct 2024.
- Leading Whitespaces of Language Models' Subword Vocabulary Poses a Confound for Calculating Word Probabilities. Byung-Doh Oh, William Schuler. 16 Jun 2024.
- Evaluating Subword Tokenization: Alien Subword Composition and OOV Generalization Challenge. Khuyagbaatar Batsuren, Ekaterina Vylomova, Verna Dankers, Tsetsuukhei Delgerbaatar, Omri Uzan, Yuval Pinter, Gábor Bella. 20 Apr 2024.
- Temperature-scaling surprisal estimates improve fit to human reading times -- but does it do so for the "right reasons"? Tong Liu, Iza Škrjanec, Vera Demberg. 15 Nov 2023.
- Accounting for Agreement Phenomena in Sentence Comprehension with Transformer Language Models: Effects of Similarity-based Interference on Surprisal and Attention. S. Ryu, Richard L. Lewis. 26 Apr 2021.
- Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation. Yonghui Wu, M. Schuster, Z. Chen, Quoc V. Le, Mohammad Norouzi, ..., Alex Rudnick, Oriol Vinyals, G. Corrado, Macduff Hughes, J. Dean. 26 Sep 2016.