In this work, we observe an interesting phenomenon: it is possible to generate reversible sentence embeddings that allow an LLM to reconstruct the original text exactly, without modifying the model's weights. This is achieved by introducing a special memory token, whose embedding is optimized on a fixed sequence while the model's weights remain frozen. When prompted with this embedding, the model reconstructs the fixed sequence exactly. We evaluate this phenomenon across English and Spanish datasets, sequences of up to approximately 240 tokens, and model scales ranging from 100M to 8B parameters. Notably, Llama 3.1 8B successfully reconstructs all tested sequences. Our findings highlight an intriguing capability of LLMs and suggest potential applications in memory-based retrieval, compression, and controlled text generation.
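The setup described above can be illustrated with a minimal sketch (not the authors' code): a single trainable memory-token embedding is prepended to the input embeddings of a frozen causal LM and optimized with cross-entropy loss so that, when the model is prompted with that embedding alone, it regenerates the fixed target sequence. The model name, hyperparameters, and variable names below are illustrative assumptions, not details from the paper.

```python
# Minimal sketch (assumptions, not the paper's implementation): train one
# "memory token" embedding so a frozen causal LM reconstructs a fixed sequence.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-3.1-8B"  # any causal LM; weights stay frozen
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()
for p in model.parameters():
    p.requires_grad_(False)  # only the memory embedding is trained

target_text = "The quick brown fox jumps over the lazy dog."
target_ids = tokenizer(target_text, return_tensors="pt").input_ids  # (1, T)

embed = model.get_input_embeddings()
hidden = embed.embedding_dim
# Trainable embedding for the memory token, initialized at the embedding scale.
mem = torch.nn.Parameter(torch.randn(1, 1, hidden) * embed.weight.std())
optimizer = torch.optim.Adam([mem], lr=1e-3)

for step in range(1000):
    optimizer.zero_grad()
    tgt_embeds = embed(target_ids)                # (1, T, H)
    inputs = torch.cat([mem, tgt_embeds], dim=1)  # prepend the memory token
    # Labels: ignore the memory-token position, predict the target sequence.
    labels = torch.cat(
        [torch.full((1, 1), -100, dtype=torch.long), target_ids], dim=1
    )
    out = model(inputs_embeds=inputs, labels=labels)
    out.loss.backward()
    optimizer.step()

# Reconstruction: prompt the frozen model with only the optimized embedding.
with torch.no_grad():
    generated = model.generate(
        inputs_embeds=mem, max_new_tokens=target_ids.shape[1], do_sample=False
    )
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```

In this sketch the "reversible embedding" is simply the learned vector `mem`: storing it suffices to recover the exact token sequence from the unmodified model via greedy decoding.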
@article{sastre2025_2506.15001,
  title   = {Memory Tokens: Large Language Models Can Generate Reversible Sentence Embeddings},
  author  = {Ignacio Sastre and Aiala Rosá},
  journal = {arXiv preprint arXiv:2506.15001},
  year    = {2025}
}