
A Language Model with Limited Memory Capacity Captures Interference in Human Sentence Processing

24 October 2023
William Timkey, Tal Linzen
arXiv:2310.16142

Papers citing "A Language Model with Limited Memory Capacity Captures Interference in Human Sentence Processing"

The Mind in the Machine: A Survey of Incorporating Psychological Theories in LLMs
Zizhou Liu, Ziwei Gong, Lin Ai, Zheng Hui, Run Chen, Colin Wayne Leach, Michelle R. Greene, Julia Hirschberg
28 Mar 2025

Strategic resource allocation in memory encoding: An efficiency principle shaping language processing
Weijie Xu, Richard Futrell
18 Mar 2025

A Psycholinguistic Evaluation of Language Models' Sensitivity to Argument Roles
Eun-Kyoung Rosa Lee, Sathvik Nair, Naomi Feldman
21 Oct 2024

Linear Recency Bias During Training Improves Transformers' Fit to Reading Times
Christian Clark, Byung-Doh Oh, William Schuler
17 Sep 2024

Testing learning hypotheses using neural networks by manipulating learning data
Cara Su-Yi Leong, Tal Linzen
05 Jul 2024

What Makes Language Models Good-enough?
Daiki Asami, Saku Sugawara
06 Jun 2024

From Form(s) to Meaning: Probing the Semantic Depths of Language Models Using Multisense Consistency
Xenia Ohmer, Elia Bruni, Dieuwke Hupkes
18 Apr 2024

Psychometric Predictive Power of Large Language Models
Tatsuki Kuribayashi, Yohei Oseki, Timothy Baldwin
13 Nov 2023

Syntactic Surprisal From Neural Models Predicts, But Underestimates, Human Processing Difficulty From Syntactic Ambiguities
Suhas Arehalli, Brian Dillon, Tal Linzen
21 Oct 2022

Context Limitations Make Neural Language Models More Human-Like
Tatsuki Kuribayashi, Yohei Oseki, Ana Brassard, Kentaro Inui
23 May 2022

Accounting for Agreement Phenomena in Sentence Comprehension with Transformer Language Models: Effects of Similarity-based Interference on Surprisal and Attention
S. Ryu, Richard L. Lewis
26 Apr 2021

Effective Approaches to Attention-based Neural Machine Translation
Thang Luong, Hieu H. Pham, Christopher D. Manning
17 Aug 2015