Escaping Collapse: The Strength of Weak Data for Large Language Model Training

13 February 2025
Kareem Amin
Sara Babakniya
Alex Bie
Weiwei Kong
Umar Syed
Sergei Vassilvitskii
arXiv:2502.08924

Papers citing "Escaping Collapse: The Strength of Weak Data for Large Language Model Training"

10 of 10 citing papers shown:

  • What Has Been Lost with Synthetic Evaluation?
    Alexander Gill, Abhilasha Ravichander, Ana Marasović
    Topics: ELM · 28 May 2025

  • When Models Don't Collapse: On the Consistency of Iterative MLE
    Daniel Barzilai, Ohad Shamir
    Topics: SyDa · 25 May 2025

  • Rate of Model Collapse in Recursive Training
    A. Suresh, A. Thangaraj, Aditya Nanda Kishore Khandavally
    Topics: SyDa · 23 Dec 2024

  • Is Model Collapse Inevitable? Breaking the Curse of Recursion by Accumulating Real and Synthetic Data
    Matthias Gerstgrasser, Rylan Schaeffer, Apratim Dey, Rafael Rafailov, Henry Sleight, ..., Andrey Gromov, Daniel A. Roberts, Diyi Yang, D. Donoho, Oluwasanmi Koyejo
    01 Apr 2024

  • Self-Distillation Bridges Distribution Gap in Language Model Fine-Tuning
    Zhaorui Yang, Tianyu Pang, Hao Feng, Han Wang, Wei Chen, Minfeng Zhu, Qian Liu
    Topics: ALM · 21 Feb 2024

  • Self-Rewarding Language Models
    Weizhe Yuan, Richard Yuanzhe Pang, Kyunghyun Cho, Xian Li, Sainbayar Sukhbaatar, Jing Xu, Jason Weston
    Topics: ReLM, SyDa, ALM, LRM · 18 Jan 2024

  • Constitutional AI: Harmlessness from AI Feedback
    Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, John Kernion, ..., Dario Amodei, Nicholas Joseph, Sam McCandlish, Tom B. Brown, Jared Kaplan
    Topics: SyDa, MoMe · 15 Dec 2022

  • Large Language Models Can Self-Improve
    Jiaxin Huang, S. Gu, Le Hou, Yuexin Wu, Xuezhi Wang, Hongkun Yu, Jiawei Han
    Topics: ReLM, AI4MH, LRM · 20 Oct 2022

  • STaR: Bootstrapping Reasoning With Reasoning
    E. Zelikman, Yuhuai Wu, Jesse Mu, Noah D. Goodman
    Topics: ReLM, LRM · 28 Mar 2022

  • Program Synthesis with Large Language Models
    Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, ..., Ellen Jiang, Carrie J. Cai, Michael Terry, Quoc V. Le, Charles Sutton
    Topics: ELM, AIMat, ReCod, ALM · 16 Aug 2021