GRAPE: Optimize Data Mixture for Group Robust Multi-target Adaptive Pretraining
26 May 2025
Simin Fan, Maria Ios Glarou, Martin Jaggi
Topics: VLM
arXiv: 2505.20380 (abs / PDF / HTML)

Papers citing "GRAPE: Optimize Data Mixture for Group Robust Multi-target Adaptive Pretraining"

3 / 3 papers shown
CLIMB: CLustering-based Iterative Data Mixture Bootstrapping for Language Model Pre-training
Shizhe Diao, Yu Yang, Y. Fu, Xin Dong, Dan Su, ..., Hongxu Yin, M. Patwary, Yingyan, Jan Kautz, Pavlo Molchanov
17 Apr 2025
Task-Adaptive Pretrained Language Models via Clustered-Importance Sampling
David Grangier, Simin Fan, Skyler Seto, Pierre Ablin
30 Sep 2024
RegMix: Data Mixture as Regression for Language Model Pre-training
Qian Liu, Xiaosen Zheng, Niklas Muennighoff, Guangtao Zeng, Longxu Dou, Tianyu Pang, Jing Jiang, Min Lin
Topics: MoE
01 Jul 2024