Language Models Are Implicitly Continuous

4 April 2025
Samuele Marro, Davide Evangelista, X. Angelo Huang, Emanuele La Malfa, Michele Lombardi, Michael Wooldridge
Abstract

Language is typically modelled with discrete sequences. However, the most successful approaches to language modelling, namely neural networks, are continuous and smooth function approximators. In this work, we show that Transformer-based language models implicitly learn to represent sentences as continuous-time functions defined over a continuous input space. This phenomenon occurs in most state-of-the-art Large Language Models (LLMs), including Llama2, Llama3, Phi3, Gemma, Gemma2, and Mistral, and suggests that LLMs reason about language in ways that differ fundamentally from how humans do. Our work formally extends Transformers to capture the nuances of time and space continuity in both the input and output spaces. Our results challenge the traditional interpretation of how LLMs understand language, with several linguistic and engineering implications.
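The abstract's central idea is that a sentence, usually treated as a discrete token sequence, can be viewed as a continuous-time function over a continuous input space. The sketch below is only a toy illustration of that general idea, not the paper's formal construction: it lifts a discrete token sequence to a function of a real-valued position by linearly interpolating between neighbouring token embeddings. The embedding table, token ids, and interpolation scheme are all invented for the example.

import numpy as np

rng = np.random.default_rng(0)

# Toy embedding table: 100 tokens, 16-dimensional embeddings (made up).
vocab_size, d_model = 100, 16
embedding_table = rng.normal(size=(vocab_size, d_model))

token_ids = [3, 17, 42, 8]              # a discrete "sentence"
E = embedding_table[token_ids]          # shape (4, 16)

def embed_at(t):
    # Embedding of the sentence at a continuous position t in [0, len-1],
    # obtained by linear interpolation between neighbouring token embeddings.
    lo = int(np.floor(t))
    hi = min(lo + 1, len(token_ids) - 1)
    frac = t - lo
    return (1.0 - frac) * E[lo] + frac * E[hi]

print(embed_at(1.0))   # exactly the second token's embedding
print(embed_at(1.5))   # halfway between the second and third tokens

Querying the sentence at a position such as 1.5, between the second and third tokens, is the kind of operation a purely discrete view of language does not admit; the paper's claim is that Transformer-based LLMs implicitly support this continuous view of their inputs.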

@article{marro2025_2504.03933,
  title={Language Models Are Implicitly Continuous},
  author={Samuele Marro and Davide Evangelista and X. Angelo Huang and Emanuele La Malfa and Michele Lombardi and Michael Wooldridge},
  journal={arXiv preprint arXiv:2504.03933},
  year={2025}
}