Dataset distillation for memorized data: Soft labels can leak held-out teacher knowledge

Dataset distillation aims to compress training data into fewer examples via a teacher, from which a student can learn effectively. While its success is often attributed to structure in the data, modern neural networks also memorize specific facts, but whether and how such memorized information is transferred in distillation settings remains less well understood. In this work, we show that students trained on soft labels from teachers can achieve non-trivial accuracy on held-out memorized data they never directly observed. This effect persists on structured data when the teacher has not generalized. To analyze it in isolation, we consider finite random i.i.d. datasets where generalization is a priori impossible and a successful teacher fit implies pure memorization. Still, students can learn non-trivial information about the held-out data, in some cases up to perfect accuracy. In those settings, enough soft labels are available to recover the teacher functionally: the student matches the teacher's predictions on all possible inputs, including the held-out memorized data. We show that these phenomena strongly depend on the temperature with which the logits are smoothed, but persist across varying network capacities, architectures, and dataset compositions.
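The mechanism at play is standard knowledge distillation with temperature-smoothed soft labels. Below is a minimal, self-contained sketch of the random-data setting described in the abstract, not the authors' code: a teacher memorizes random i.i.d. labels, a student is distilled on the teacher's temperature-smoothed soft labels for only half of the points, and we then check whether the student recovers the teacher's memorized labels on the held-out half. The network sizes, optimizer, temperature, and step counts are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
d, n, n_classes, temperature = 32, 200, 10, 4.0

# Random i.i.d. inputs with random labels: generalization is a priori
# impossible, so a perfect teacher fit implies pure memorization.
X = torch.randn(n, d)
y = torch.randint(0, n_classes, (n,))

def mlp():
    return torch.nn.Sequential(
        torch.nn.Linear(d, 512), torch.nn.ReLU(), torch.nn.Linear(512, n_classes)
    )

# Overfit the teacher until it memorizes the random labels.
teacher = mlp()
opt = torch.optim.Adam(teacher.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    F.cross_entropy(teacher(X), y).backward()
    opt.step()

# Distill the student on temperature-smoothed soft labels for the
# first half of the data only; the second half is held out.
seen, held_out = slice(0, n // 2), slice(n // 2, n)
with torch.no_grad():
    soft = F.softmax(teacher(X[seen]) / temperature, dim=-1)

student = mlp()
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    log_p = F.log_softmax(student(X[seen]) / temperature, dim=-1)
    F.kl_div(log_p, soft, reduction="batchmean").backward()
    opt.step()

# Accuracy on the teacher's memorized labels the student never saw;
# above chance (1 / n_classes) indicates leaked memorized knowledge.
with torch.no_grad():
    match = (student(X[held_out]).argmax(-1) == y[held_out]).float().mean()
print(f"student accuracy on held-out memorized data: {match.item():.2f}")
```

Higher temperatures flatten the teacher's output distribution, exposing more of its relative confidences across classes per soft label, which is consistent with the abstract's observation that the effect depends strongly on the smoothing temperature.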
@article{behrens2025_2506.14457,
  title={Dataset distillation for memorized data: Soft labels can leak held-out teacher knowledge},
  author={Freya Behrens and Lenka Zdeborová},
  journal={arXiv preprint arXiv:2506.14457},
  year={2025}
}