Pre-training LLMs using human-like development data corpus

8 November 2023
Khushi Bhardwaj
Raj Sanjay Shah
Sashank Varma
arXiv: 2311.04666

Papers citing "Pre-training LLMs using human-like development data corpus"

5 papers shown

Findings of the BabyLM Challenge: Sample-Efficient Pretraining on Developmentally Plausible Corpora
Alex Warstadt, Aaron Mueller, Leshem Choshen, E. Wilcox, Chengxu Zhuang, ..., Rafael Mosquera, Bhargavi Paranjape, Adina Williams, Tal Linzen, Ryan Cotterell
10 Apr 2025

Context-Aware Toxicity Detection in Multiplayer Games: Integrating Domain-Adaptive Pretraining and Match Metadata
Adrien Schurger-Foy, Rafal Kocielnik, Caglar Gulcehre, R. Alvarez
02 Apr 2025

The potential -- and the pitfalls -- of using pre-trained language models as cognitive science theories
Raj Sanjay Shah, Sashank Varma
22 Jan 2025

When Search Engine Services meet Large Language Models: Visions and Challenges
Haoyi Xiong, Jiang Bian, Yuchen Li, Xuhong Li, Jundong Li, Shuaiqiang Wang, D. Yin, Sumi Helal
28 Jun 2024

Incremental Comprehension of Garden-Path Sentences by Large Language Models: Semantic Interpretation, Syntactic Re-Analysis, and Attention
Andrew Li, Xianle Feng, Siddhant Narang, Austin Peng, Tianle Cai, Raj Sanjay Shah, Sashank Varma
25 May 2024