ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Is Child-Directed Speech Effective Training Data for Language Models?

7 August 2024
Steven Y. Feng, Noah D. Goodman, Michael C. Frank

Papers citing "Is Child-Directed Speech Effective Training Data for Language Models?"

6 / 6 papers shown
Findings of the BabyLM Challenge: Sample-Efficient Pretraining on Developmentally Plausible Corpora
Alex Warstadt, Aaron Mueller, Leshem Choshen, E. Wilcox, Chengxu Zhuang, ..., Rafael Mosquera, Bhargavi Paranjape, Adina Williams, Tal Linzen, Ryan Cotterell
10 Apr 2025
Language Models Learn Rare Phenomena from Less Rare Phenomena: The Case of the Missing AANNs
Kanishka Misra, Kyle Mahowald
28 Mar 2024
A systematic investigation of learnability from single child linguistic input
Yulu Qin, Wentao Wang, Brenden M. Lake
12 Feb 2024
Can training neural language models on a curriculum with developmentally plausible data improve alignment with human reading behavior?
Aryaman Chobey, Oliver Smith, Anzi Wang, Grusha Prasad
30 Nov 2023
Visual Grounding Helps Learn Word Meanings in Low-Data Regimes
Chengxu Zhuang, Evelina Fedorenko, Jacob Andreas
20 Oct 2023
RoBERTa: A Robustly Optimized BERT Pretraining Approach
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, M. Lewis, Luke Zettlemoyer, Veselin Stoyanov
26 Jul 2019