Have Multimodal Large Language Models (MLLMs) Really Learned to Tell the Time on Analog Clocks?

Abstract
Multimodal Large Language Models (MLLMs), which can answer complex questions about an image, still struggle to tell the time on analog clocks. This is probably due to the scarcity of images showing clocks at different times in their training sets. In this work we explore the issue with one of the latest MLLMs, GPT-4.1, to understand why MLLMs fail to tell the time and whether fine-tuning can solve the problem. The results show that models are making progress in reading the time on analog clocks. But have they really learned to do it, or have they only learned patterns present in their training datasets? We put the models to the test with different clock designs to illustrate the limitations of MLLMs in abstracting and generalizing.
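Testing "with different clocks" implies rendering clock faces at arbitrary times rather than relying on photos scraped from the web. The following is a minimal, hypothetical sketch (not taken from the paper) of how such test images could be generated with matplotlib; the function name draw_clock and all styling choices are assumptions for illustration only.

# Hypothetical sketch (not the authors' code): render an analog clock face
# showing a given time, as one way to build a test set of clock images.
import math
import matplotlib.pyplot as plt

def draw_clock(hour: int, minute: int, path: str) -> None:
    """Draw a simple analog clock showing hour:minute and save it to path."""
    fig, ax = plt.subplots(figsize=(3, 3))
    ax.set_aspect("equal")
    ax.axis("off")

    # Clock face outline and the twelve hour ticks.
    ax.add_patch(plt.Circle((0, 0), 1.0, fill=False, linewidth=2))
    for h in range(12):
        angle = math.radians(90 - h * 30)  # 12 o'clock at the top
        ax.plot([0.9 * math.cos(angle), math.cos(angle)],
                [0.9 * math.sin(angle), math.sin(angle)],
                linewidth=2, color="black")

    # Hands: the hour hand advances with the minutes, as on a real clock.
    minute_angle = math.radians(90 - minute * 6)
    hour_angle = math.radians(90 - (hour % 12 + minute / 60) * 30)
    ax.plot([0, 0.55 * math.cos(hour_angle)], [0, 0.55 * math.sin(hour_angle)],
            linewidth=4, color="black")
    ax.plot([0, 0.85 * math.cos(minute_angle)], [0, 0.85 * math.sin(minute_angle)],
            linewidth=2, color="black")

    ax.set_xlim(-1.1, 1.1)
    ax.set_ylim(-1.1, 1.1)
    fig.savefig(path, dpi=150, bbox_inches="tight")
    plt.close(fig)

# Example: a clock showing 10:10, a time over-represented in typical web imagery,
# versus an arbitrary time such as 4:37 that a model is less likely to have seen.
draw_clock(10, 10, "clock_10_10.png")
draw_clock(4, 37, "clock_04_37.png")

Varying the rendering (hand lengths, tick styles, Roman numerals) in such a generator is one way to probe whether a fine-tuned model has learned to read the hands or has merely memorized the appearance of the clocks in its training data.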
@article{fu2025_2505.10862,
  title={Have Multimodal Large Language Models (MLLMs) Really Learned to Tell the Time on Analog Clocks?},
  author={Tairan Fu and Miguel González and Javier Conde and Elena Merino-Gómez and Pedro Reviriego},
  journal={arXiv preprint arXiv:2505.10862},
  year={2025}
}