Unpacking Let Alone: Human-Scale Models Generalize to a Rare Construction in Form but not Meaning

4 June 2025
Wesley Scivetti, Tatsuya Aoyama, Ethan Wilcox, Nathan Schneider
Abstract

Humans have a remarkable ability to acquire and understand grammatical phenomena that are seen rarely, if ever, during childhood. Recent evidence suggests that language models with human-scale pretraining data may possess a similar ability by generalizing from frequent to rare constructions. However, it remains an open question how widespread this generalization ability is, and to what extent this knowledge extends to meanings of rare constructions, as opposed to just their forms. We fill this gap by testing human-scale transformer language models on their knowledge of both the form and meaning of the (rare and quirky) English LET-ALONE construction. To evaluate our LMs we construct a bespoke synthetic benchmark that targets syntactic and semantic properties of the construction. We find that human-scale LMs are sensitive to form, even when related constructions are filtered from the dataset. However, human-scale LMs do not make correct generalizations about LET-ALONE's meaning. These results point to an asymmetry in the current architectures' sample efficiency between language form and meaning, something which is not present in human language learners.
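The abstract describes evaluating language models on a synthetic benchmark of the LET-ALONE construction. As a minimal sketch (not the authors' code or benchmark), the form-sensitivity side of such an evaluation is often run as a minimal-pair comparison: score a licensed and an unlicensed variant of a sentence under a pretrained causal LM and check which the model prefers. The model name and example sentences below are illustrative assumptions only.

# Minimal-pair scoring sketch with Hugging Face transformers.
# Assumptions: "gpt2" as a stand-in model and a hypothetical LET-ALONE
# minimal pair; the paper's actual models and benchmark items differ.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in for a human-scale pretrained LM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def sentence_logprob(sentence: str) -> float:
    """Total log-probability of the sentence (conditioned on its first token)."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        # Passing labels to a causal LM returns the mean negative
        # log-likelihood over the predicted tokens; multiply by the
        # number of predicted tokens to recover the total.
        loss = model(ids, labels=ids).loss
    return -loss.item() * (ids.size(1) - 1)

# Hypothetical minimal pair: negative licensing context present vs. absent.
good = "She can't eat shrimp, let alone lobster."
bad = "She can eat shrimp, let alone lobster."

print(sentence_logprob(good) > sentence_logprob(bad))

A model counts as sensitive to the construction's form if, across many such pairs, it systematically assigns higher probability to the licensed variant; probing knowledge of the construction's meaning requires separate, semantically targeted items.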

@article{scivetti2025_2506.04408,
  title={Unpacking Let Alone: Human-Scale Models Generalize to a Rare Construction in Form but not Meaning},
  author={Wesley Scivetti and Tatsuya Aoyama and Ethan Wilcox and Nathan Schneider},
  journal={arXiv preprint arXiv:2506.04408},
  year={2025}
}
Main: 8 pages · Bibliography: 2 pages · Appendix: 1 page · 3 figures · 8 tables