
Putting a Face to Forgetting: Continual Learning meets Mechanistic Interpretability

Sergi Masip
Gido M. van de Ven
Javier Ferrando
Tinne Tuytelaars
Main: 7 pages · 13 figures · Bibliography: 3 pages · Appendix: 14 pages
Abstract

Catastrophic forgetting in continual learning is often measured at the performance or last-layer representation level, overlooking the underlying mechanisms. We introduce a mechanistic framework that offers a geometric interpretation of catastrophic forgetting as the result of transformations to the encoding of individual features. These transformations can lead to forgetting by reducing the allocated capacity of features (worse representation) and disrupting their readout by downstream computations. Analysis of a tractable model formalizes this view, allowing us to identify best- and worst-case scenarios. Through experiments on this model, we empirically test our formal analysis and highlight the detrimental effect of depth. Finally, we demonstrate how our framework can be used in the analysis of practical models through the use of Crosscoders. We present a case study of a Vision Transformer trained on sequential CIFAR-10. Our work provides a new, feature-centric vocabulary for continual learning.
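As context for the abstract's first sentence, the sketch below illustrates the conventional performance-level measurement of catastrophic forgetting (the drop in task-A accuracy after training on task B); it is not the paper's mechanistic framework, and the model, synthetic tasks, and hyperparameters are illustrative placeholders.

    # Minimal sketch of performance-level forgetting (illustrative only).
    import torch
    import torch.nn as nn

    def make_task(seed, n=512, d=32):
        # Synthetic binary task: a random linear rule in d dimensions.
        g = torch.Generator().manual_seed(seed)
        x = torch.randn(n, d, generator=g)
        w = torch.randn(d, generator=g)
        y = (x @ w > 0).long()
        return x, y

    def accuracy(model, x, y):
        with torch.no_grad():
            return (model(x).argmax(dim=1) == y).float().mean().item()

    def train(model, x, y, steps=200, lr=1e-2):
        opt = torch.optim.SGD(model.parameters(), lr=lr)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(steps):
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

    model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 2))
    (xa, ya), (xb, yb) = make_task(0), make_task(1)

    train(model, xa, ya)
    acc_a_before = accuracy(model, xa, ya)  # task-A accuracy right after task A
    train(model, xb, yb)                    # continue training on task B only
    acc_a_after = accuracy(model, xa, ya)   # task-A accuracy after task B

    # Performance-level "forgetting" of task A: the accuracy drop.
    print(f"forgetting(A) = {acc_a_before - acc_a_after:.3f}")

The paper's contribution is to go below this accuracy-level view and attribute such drops to transformations of individual feature encodings.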
