Rethinking Data Mixing from the Perspective of Large Language Models

Yuanjian Xu
Tianze Sun
Changwei Xu
XinLong Zhao
Jianing Hao
Ran Chen
Yang Liu
Ruijie Xu
Stephen Chen
Guang Zhang
Main: 4 pages · Bibliography: 2 pages · Appendix: 7 pages · 5 figures · 10 tables
Abstract

The data mixing strategy is essential for large language model (LLM) training. Empirical evidence shows that inappropriate strategies can significantly reduce generalization. Although recent methods have improved empirical performance, several fundamental questions remain open: what constitutes a domain, whether human and model perceptions of domains are aligned, and how domain weighting influences generalization. We address these questions by establishing formal connections between gradient dynamics and domain distributions, offering a theoretical framework that clarifies the role of domains in training dynamics. Building on this analysis, we introduce DoGraph, a reweighting framework that formulates data scheduling as a graph-constrained optimization problem. Extensive experiments on GPT-2 models of varying scales demonstrate that DoGraph consistently achieves competitive performance.
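The abstract does not spell out how the graph-constrained optimization is set up, so the sketch below is only a generic illustration of the idea: domain weights are restricted to the probability simplex and optimized under a graph-smoothness penalty built from a domain-similarity graph. The utility vector `u`, similarity matrix `S`, and penalty strength `lam` are assumed names for illustration and are not taken from the paper; this is not the DoGraph algorithm itself.

```python
"""Minimal sketch of graph-constrained domain reweighting (illustrative only).

Not the paper's DoGraph method; it shows one plausible formulation:
maximize per-domain utility u @ w minus a graph-Laplacian smoothness
penalty, with the weights w constrained to the probability simplex.
"""
import numpy as np


def project_to_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)


def graph_constrained_weights(u, S, lam=0.1, lr=0.05, steps=500):
    """Projected gradient ascent on  u @ w - lam * w @ L @ w  over the simplex.

    u    : per-domain utility estimates (e.g., loss-reduction signals)  [assumed]
    S    : symmetric domain-similarity matrix defining the graph        [assumed]
    lam  : strength of the graph-smoothness penalty                     [assumed]
    """
    L = np.diag(S.sum(axis=1)) - S       # graph Laplacian of the domain graph
    w = np.full(len(u), 1.0 / len(u))    # start from the uniform mixture
    for _ in range(steps):
        grad = u - 2.0 * lam * L @ w     # gradient of the penalized objective
        w = project_to_simplex(w + lr * grad)
    return w


# Toy usage: three domains, where domains 0 and 1 are highly similar.
u = np.array([0.8, 0.3, 0.5])
S = np.array([[0.0, 0.9, 0.1],
              [0.9, 0.0, 0.1],
              [0.1, 0.1, 0.0]])
print(graph_constrained_weights(u, S))
```

The graph penalty pulls the weights of similar domains toward each other, so a domain with a noisy or low utility estimate can still receive weight if its neighbors in the graph are useful; the actual constraint used by DoGraph may differ.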
