Large GPT-like Models are Bad Babies: A Closer Look at the Relationship between Linguistic Competence and Psycholinguistic Measures
arXiv:2311.04547, 8 November 2023
Julius Steuer, Marius Mosbach, Dietrich Klakow
Papers citing "Large GPT-like Models are Bad Babies: A Closer Look at the Relationship between Linguistic Competence and Psycholinguistic Measures" (14 papers shown):
1. Model Connectomes: A Generational Approach to Data-Efficient Language Models. Klemen Kotar, Greta Tuckute. 29 Apr 2025.
2. From Language to Cognition: How LLMs Outgrow the Human Language Network. Badr AlKhamissi, Greta Tuckute, Yingtian Tang, Taha Binhuraib, Antoine Bosselut, Martin Schrimpf. 03 Mar 2025.
3. A Psycholinguistic Evaluation of Language Models' Sensitivity to Argument Roles. Eun-Kyoung Rosa Lee, Sathvik Nair, Naomi Feldman. 21 Oct 2024.
4. Reverse-Engineering the Reader. Samuel Kiegeland, Ethan Gotlieb Wilcox, Afra Amini, David Robert Reich, Ryan Cotterell. 16 Oct 2024.
5. Brain-Like Language Processing via a Shallow Untrained Multihead Attention Network. Badr AlKhamissi, Greta Tuckute, Antoine Bosselut, Martin Schrimpf. 21 Jun 2024.
6. Leading Whitespaces of Language Models' Subword Vocabulary Poses a Confound for Calculating Word Probabilities. Byung-Doh Oh, William Schuler. 16 Jun 2024.
7. Instruction-tuning Aligns LLMs to the Human Brain. Khai Loong Aw, Syrielle Montariol, Badr AlKhamissi, Martin Schrimpf, Antoine Bosselut. 01 Dec 2023.
8. To Repeat or Not To Repeat: Insights from Scaling LLM under Token-Crisis. Fuzhao Xue, Yao Fu, Wangchunshu Zhou, Zangwei Zheng, Yang You. 22 May 2023.
9. Syntactic Surprisal From Neural Models Predicts, But Underestimates, Human Processing Difficulty From Syntactic Ambiguities. Suhas Arehalli, Brian Dillon, Tal Linzen. 21 Oct 2022.
10. Context Limitations Make Neural Language Models More Human-Like. Tatsuki Kuribayashi, Yohei Oseki, Ana Brassard, Kentaro Inui. 23 May 2022.
11. Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation. Ofir Press, Noah A. Smith, M. Lewis. 27 Aug 2021.
12. Shortformer: Better Language Modeling using Shorter Inputs. Ofir Press, Noah A. Smith, M. Lewis. 31 Dec 2020.
13. Scaling Laws for Neural Language Models. Jared Kaplan, Sam McCandlish, T. Henighan, Tom B. Brown, B. Chess, R. Child, Scott Gray, Alec Radford, Jeff Wu, Dario Amodei. 23 Jan 2020.
14. GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding. Alex Jinpeng Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman. 20 Apr 2018.