Neural ODE Transformers: Analyzing Internal Dynamics and Adaptive Fine-tuning

Recent advancements in large language models (LLMs) based on transformer architectures have sparked significant interest in understanding their inner workings. In this paper, we introduce a novel approach to modeling transformer architectures using highly flexible non-autonomous neural ordinary differential equations (ODEs). Our proposed model parameterizes all weights of attention and feed-forward blocks through neural networks, expressing these weights as functions of a continuous layer index. Through spectral analysis of the model's dynamics, we uncover an increase in eigenvalue magnitude that challenges the weight-sharing assumption prevalent in existing theoretical studies. We also leverage the Lyapunov exponent to examine token-level sensitivity, enhancing model interpretability. Our neural ODE transformer demonstrates performance comparable to or better than vanilla transformers across various configurations and datasets, while offering flexible fine-tuning capabilities that can adapt to different architectural constraints.
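The abstract summarizes the core construction: the weights of each attention and feed-forward block are emitted by a small hyper-network conditioned on a continuous layer index t, and hidden states evolve under the resulting non-autonomous ODE. The PyTorch sketch below is an illustrative reconstruction of that idea under assumed choices (module names, tiny dimensions, and a fixed-step Euler integrator); it is not the authors' implementation.

```python
# Minimal sketch (not the paper's code): block weights are functions of a continuous
# layer index t, and hidden states follow dx/dt = f(x, t; theta(t)), integrated here
# with a fixed-step Euler scheme. Names, sizes, and the solver are assumptions.

import torch
import torch.nn as nn


class WeightGenerator(nn.Module):
    """Maps a scalar layer index t in [0, 1] to a flat weight vector (hypothetical)."""

    def __init__(self, out_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, hidden), nn.SiLU(), nn.Linear(hidden, out_dim))

    def forward(self, t: torch.Tensor) -> torch.Tensor:
        return self.net(t.view(1, 1)).squeeze(0)


class ODETransformerBlock(nn.Module):
    """Attention + feed-forward vector field whose projection weights depend on t."""

    def __init__(self, d_model: int = 32, d_ff: int = 64):
        super().__init__()
        self.d_model, self.d_ff = d_model, d_ff
        # One generator per weight matrix; biases omitted for brevity.
        self.gen_qkv = WeightGenerator(3 * d_model * d_model)
        self.gen_out = WeightGenerator(d_model * d_model)
        self.gen_ff1 = WeightGenerator(d_ff * d_model)
        self.gen_ff2 = WeightGenerator(d_model * d_ff)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        d = self.d_model
        # Generate layer-index-dependent weights.
        w_qkv = self.gen_qkv(t).view(3, d, d)
        w_out = self.gen_out(t).view(d, d)
        w_ff1 = self.gen_ff1(t).view(self.d_ff, d)
        w_ff2 = self.gen_ff2(t).view(d, self.d_ff)

        # Single-head self-attention term.
        h = self.norm1(x)
        q, k, v = (h @ w_qkv[i].T for i in range(3))
        attn = torch.softmax(q @ k.transpose(-2, -1) / d ** 0.5, dim=-1) @ v
        x_attn = attn @ w_out.T

        # Feed-forward term.
        h2 = self.norm2(x)
        x_ff = torch.relu(h2 @ w_ff1.T) @ w_ff2.T

        # Vector field of the non-autonomous ODE.
        return x_attn + x_ff


def integrate(block: ODETransformerBlock, x: torch.Tensor, steps: int = 8) -> torch.Tensor:
    """Fixed-step Euler integration over t in [0, 1] (assumed solver choice)."""
    dt = 1.0 / steps
    for i in range(steps):
        t = torch.tensor([i * dt])
        x = x + dt * block(x, t)
    return x


if __name__ == "__main__":
    tokens = torch.randn(4, 10, 32)  # (batch, sequence, d_model)
    block = ODETransformerBlock()
    out = integrate(block, tokens)
    print(out.shape)  # torch.Size([4, 10, 32])
```

Because the continuous layer index replaces a fixed layer count, the number of integration steps can in principle be changed at fine-tuning time, which is one way to read the abstract's claim of adapting to different architectural constraints.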
@article{tong2025_2503.01329,
  title   = {Neural ODE Transformers: Analyzing Internal Dynamics and Adaptive Fine-tuning},
  author  = {Anh Tong and Thanh Nguyen-Tang and Dongeun Lee and Duc Nguyen and Toan Tran and David Hall and Cheongwoong Kang and Jaesik Choi},
  journal = {arXiv preprint arXiv:2503.01329},
  year    = {2025}
}