ResearchTrend.AI

Decomposition of surprisal: Unified computational model of ERP components in language processing
Jiaxuan Li, Richard Futrell
arXiv:2409.06803, 10 September 2024

Papers citing "Decomposition of surprisal: Unified computational model of ERP components in language processing"

7 / 7 papers shown

  1. Testing the Predictions of Surprisal Theory in 11 Languages
     Ethan Gotlieb Wilcox, Tiago Pimentel, Clara Meister, Ryan Cotterell, R. Levy (07 Jul 2023)
  2. So Cloze yet so Far: N400 Amplitude is Better Predicted by Distributional Information than Human Predictability Judgements
     J. Michaelov, S. Coulson, Benjamin Bergen (02 Sep 2021)
  3. Different kinds of cognitive plausibility: why are transformers better than RNNs at predicting N400 amplitude?
     J. Michaelov, Megan D. Bardolph, S. Coulson, Benjamin Bergen (20 Jul 2021)
  4. How well does surprisal explain N400 amplitude under different experimental conditions?
     J. Michaelov, Benjamin Bergen (09 Oct 2020)
  5. On the Predictive Power of Neural Language Models for Human Real-Time Comprehension Behavior
     Ethan Gotlieb Wilcox, Jon Gauthier, Jennifer Hu, Peng Qian, R. Levy (02 Jun 2020)
  6. Language Models are Few-Shot Learners
     Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, ..., Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei (28 May 2020)
  7. BERTs of a feather do not generalize together: Large variability in generalization across models with similar test set performance
     R. Thomas McCoy, Junghyun Min, Tal Linzen (07 Nov 2019)