Distilled Circuits: A Mechanistic Study of Internal Restructuring in Knowledge Distillation

Knowledge distillation compresses a larger neural model (the teacher) into a smaller, faster student model by training the student to match the teacher's outputs. However, the internal computational transformations that occur during this process remain poorly understood. We apply techniques from mechanistic interpretability to analyze how internal circuits, representations, and activation patterns differ between teacher and student. Focusing on GPT2-small and its distilled counterpart DistilGPT2, we find that student models reorganize, compress, and discard teacher components, often resulting in stronger reliance on fewer individual components. To quantify functional alignment beyond output similarity, we introduce an alignment metric based on influence-weighted component similarity, validated across multiple tasks. Our findings reveal that while knowledge distillation preserves broad functional behaviors, it also causes significant shifts in internal computation, with important implications for the robustness and generalization capacity of distilled models.
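The abstract does not spell out how influence-weighted component similarity is computed, so the following is only a minimal illustrative sketch of the general idea: score representational similarity between matched teacher/student components and weight each pair by how influential those components are. The function name, the use of cosine similarity, and the averaging of influence scores are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def influence_weighted_alignment(student_components, teacher_components,
                                 student_influence, teacher_influence):
    """Hypothetical sketch of an influence-weighted alignment score.

    student_components / teacher_components: lists of activation vectors,
        one per matched component (e.g. attention head or MLP), assumed
        to be projected into a shared comparison space.
    student_influence / teacher_influence: non-negative per-component
        influence scores (e.g. from ablation or attribution methods).
    """
    sims, weights = [], []
    for s_vec, t_vec, s_inf, t_inf in zip(student_components,
                                          teacher_components,
                                          student_influence,
                                          teacher_influence):
        # Cosine similarity between the matched components' activations.
        sim = np.dot(s_vec, t_vec) / (
            np.linalg.norm(s_vec) * np.linalg.norm(t_vec) + 1e-8)
        # Weight each pair by the mean influence of the two components
        # (an assumed choice; other combinations are possible).
        weights.append((s_inf + t_inf) / 2.0)
        sims.append(sim)
    sims, weights = np.array(sims), np.array(weights)
    # Influence-weighted average similarity: higher values mean the
    # components that matter most behave most alike across models.
    return float(np.sum(weights * sims) / (np.sum(weights) + 1e-8))
```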
@article{haskins2025_2505.10822,
  title   = {Distilled Circuits: A Mechanistic Study of Internal Restructuring in Knowledge Distillation},
  author  = {Reilly Haskins and Benjamin Adams},
  journal = {arXiv preprint arXiv:2505.10822},
  year    = {2025}
}