Understanding how Transformers work and how they process information is key to the theoretical and empirical advancement of these machines. In this work, we demonstrate the existence of two phenomena in Transformers, namely isolation and continuity. Both of these phenomena hinder Transformers from learning even simple pattern sequences. Isolation means that any learnable sequence must be isolated from every other learnable sequence, and hence some sequences cannot be learned by a single Transformer at the same time. Continuity means that an attractor basin forms around a learned sequence, such that any sequence falling in that basin will collapse towards the learned sequence. Here, we mathematically prove that these phenomena emerge in all Transformers that use compact positional encoding, and we design rigorous experiments demonstrating that the theoretical limitations we shed light on also occur at practical scale.
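To make the two statements above more concrete, the following is a minimal, purely illustrative formalization; the metric d on input sequences, the radius epsilon, and the notion of a Transformer T "learning" a sequence are assumptions introduced here for illustration and are not the paper's actual definitions or theorem statements.

% Illustrative sketch only; d, \varepsilon, and "learned" are assumed notions,
% not the definitions used in the paper.
%
% Continuity (attractor basin): if T has learned the sequence s, then there is
% some radius \varepsilon(s) > 0 such that every sequence s' close enough to s
% is continued as if it were s, i.e. it collapses onto the learned pattern:
%   \exists\, \varepsilon(s) > 0 \ \forall s' :\; d(s, s') < \varepsilon(s)
%     \;\Longrightarrow\; T(s') = T(s).
%
% Isolation: two distinct sequences that are both learned by the same T cannot
% be arbitrarily close, since their attractor basins must not overlap:
%   s_1 \neq s_2 \text{ both learned by } T
%     \;\Longrightarrow\; d(s_1, s_2) \geq \max\{\varepsilon(s_1), \varepsilon(s_2)\}.

Under this reading, isolation is a direct consequence of continuity: if a second learned sequence fell inside the basin of the first, it would be collapsed onto the first and could not be learned in its own right.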
@article{pasten2025_2505.10606,
  title={Continuity and Isolation Lead to Doubts or Dilemmas in Large Language Models},
  author={Hector Pasten and Felipe Urrutia and Hector Jimenez and Cristian B. Calderon and Cristóbal Rojas and Alexander Kozachinskiy},
  journal={arXiv preprint arXiv:2505.10606},
  year={2025}
}