Minifinetuning: Low-Data Generation Domain Adaptation through Corrective Self-Distillation

Finetuning language models for a new domain inevitably leads to the deterioration of their general performance, and this degradation becomes more pronounced the more limited the finetuning data. We introduce minifinetuning (MFT), a method for language model domain adaptation that considerably reduces the effects of overfitting-induced degeneralization in low-data settings, and which does so without requiring any pre-training data for replay. MFT demonstrates 2-10x more favourable specialization-to-degeneralization ratios than standard finetuning across a wide range of models and domains, and exhibits an intrinsic robustness to overfitting when data in the new domain is scarce, down to as little as 500 samples. Employing corrective self-distillation that is individualized at the sample level, MFT outperforms parameter-efficient finetuning methods, demonstrates replay-like degeneralization mitigation properties, and is composable with either for a combined effect.
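The abstract does not spell out the loss, but the core idea of corrective self-distillation can be illustrated with a toy sketch: a frozen copy of the pre-finetuning model acts as teacher, and its per-token distribution is corrected toward the ground-truth token before being distilled into the student. The mixing scheme, the `alpha` hyperparameter, and all function names below are assumptions for illustration, not the paper's exact formulation.

```python
import math

def corrective_target(teacher_probs, true_token, alpha=0.3):
    """Mix the frozen teacher's distribution with the one-hot ground truth.

    alpha (the correction strength) is an assumed hyperparameter;
    the paper individualizes the correction per sample.
    """
    target = [(1 - alpha) * p for p in teacher_probs]
    target[true_token] += alpha
    return target

def kl_divergence(target, student_probs):
    """KL(target || student): the distillation loss for one token."""
    return sum(t * math.log(t / s)
               for t, s in zip(target, student_probs) if t > 0)

# Toy example with a vocabulary of 4 tokens.
teacher = [0.10, 0.60, 0.20, 0.10]   # frozen pre-finetuning model
student = [0.05, 0.70, 0.15, 0.10]   # model being finetuned
target = corrective_target(teacher, true_token=2, alpha=0.3)
loss = kl_divergence(target, student)
```

Because the target stays close to the teacher's original distribution, minimizing this loss pulls the student toward the domain's ground-truth tokens while anchoring it to its pre-finetuning behaviour, which is the mechanism the abstract credits for mitigating degeneralization without replay data.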
@article{belcak2025_2506.15702,
  title   = {Minifinetuning: Low-Data Generation Domain Adaptation through Corrective Self-Distillation},
  author  = {Peter Belcak and Greg Heinrich and Jan Kautz and Pavlo Molchanov},
  journal = {arXiv preprint arXiv:2506.15702},
  year    = {2025}
}