
Neuroplasticity and Corruption in Model Mechanisms: A Case Study Of Indirect Object Identification

Abstract

Previous research has shown that fine-tuning language models on general tasks enhances their underlying mechanisms. However, the impact of fine-tuning on poisoned data and the resulting changes in these mechanisms are poorly understood. This study investigates the changes in a model's mechanisms during toxic fine-tuning and identifies the primary corruption mechanisms. We also analyze the changes after retraining a corrupted model on the original dataset and observe neuroplasticity behaviors, where the model relearns its original mechanisms after fine-tuning of the corrupted model. Our findings indicate that: (i) underlying mechanisms are amplified by task-specific fine-tuning, an effect that generalizes to longer training epochs; (ii) model corruption via toxic fine-tuning is localized to specific circuit components; (iii) models exhibit neuroplasticity when corrupted models are retrained on a clean dataset, reforming the original model mechanisms.

@article{chhabra2025_2503.01896,
  title={Neuroplasticity and Corruption in Model Mechanisms: A Case Study of Indirect Object Identification},
  author={Vishnu Kabir Chhabra and Ding Zhu and Mohammad Mahdi Khalili},
  journal={arXiv preprint arXiv:2503.01896},
  year={2025}
}