Large Language Models Implicitly Learn to See and Hear Just By Reading

20 May 2025
Prateek Verma
Mert Pilanci
Abstract

This paper presents a fascinating finding: by training an auto-regressive LLM on text tokens alone, the model internally develops an ability to understand images and audio, in effect learning to see and hear just by reading. Popular audio and visual LLMs fine-tune a text LLM to produce text output conditioned on image and audio embeddings. Our architecture, in contrast, takes patches of images, audio waveforms, or tokens as input and produces the embeddings or category labels typical of a classification pipeline. We show the generality of the text weights in aiding audio classification on the FSD-50K and GTZAN datasets, and we show the same for image classification on CIFAR-10 and Fashion-MNIST, operating directly on image patches. This supports the notion that text LLMs learn powerful internal circuits that can be utilized by activating the necessary connections for various applications, rather than training models from scratch every single time.
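To make the setup concrete, below is a minimal sketch of the kind of pipeline the abstract describes: frozen, text-pretrained transformer weights used as a feature extractor over image patches, with only a small input projection and a classification head trained. The choice of GPT-2 as the text LLM, the patch size, and the mean pooling are illustrative assumptions, not details taken from the paper.

```python
# Sketch (not the authors' code): reuse a frozen text-only LLM as a
# feature extractor for image patches; train only the input projection
# and the classification head.
import torch
import torch.nn as nn
from transformers import GPT2Model

class FrozenTextLLMClassifier(nn.Module):
    def __init__(self, num_classes: int, patch_dim: int = 16 * 16 * 3):
        super().__init__()
        self.llm = GPT2Model.from_pretrained("gpt2")    # text-pretrained weights
        for p in self.llm.parameters():                 # keep the text weights frozen
            p.requires_grad = False
        hidden = self.llm.config.hidden_size
        self.patch_proj = nn.Linear(patch_dim, hidden)  # map flattened patches to token embeddings
        self.head = nn.Linear(hidden, num_classes)      # classification head

    def forward(self, patches: torch.Tensor) -> torch.Tensor:
        # patches: (batch, num_patches, patch_dim), e.g. a 32x32 CIFAR-10 image
        # split into four non-overlapping 16x16 patches.
        tokens = self.patch_proj(patches)
        hidden_states = self.llm(inputs_embeds=tokens).last_hidden_state
        pooled = hidden_states.mean(dim=1)              # average-pool over the patch sequence
        return self.head(pooled)

# Usage: CIFAR-10 images flattened into 4 patches of 16x16x3 each.
model = FrozenTextLLMClassifier(num_classes=10)
dummy = torch.randn(8, 4, 16 * 16 * 3)
logits = model(dummy)                                   # shape (8, 10)
```

The same recipe applies to the audio case by replacing the patch projection with a projection over audio waveform frames or audio tokens.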

@article{verma2025_2505.17091,
  title={Large Language Models Implicitly Learn to See and Hear Just By Reading},
  author={Prateek Verma and Mert Pilanci},
  journal={arXiv preprint arXiv:2505.17091},
  year={2025}
}