
Reading Recognition in the Wild

Main: 9 pages · Appendix: 21 pages · Bibliography: 3 pages · 35 figures · 17 tables
Abstract

To enable egocentric contextual AI in always-on smart glasses, it is crucial to keep a record of the user's interactions with the world, including during reading. In this paper, we introduce a new task of reading recognition: determining when the user is reading. We first present Reading in the Wild, a first-of-its-kind large-scale multimodal dataset containing 100 hours of reading and non-reading videos in diverse and realistic scenarios. We then identify three modalities (egocentric RGB, eye gaze, head pose) that can be used to solve the task, and present a flexible transformer model that performs the task using these modalities, either individually or combined. We show that these modalities are relevant and complementary to the task, and investigate how to efficiently and effectively encode each modality. Additionally, we show the usefulness of this dataset for classifying types of reading, extending current reading understanding studies, which have been conducted in constrained settings, to greater scale, diversity, and realism.
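
To illustrate the flavor of the flexible multimodal design described in the abstract, below is a minimal sketch (not the authors' implementation) of a transformer that accepts any subset of the three modalities. The names, feature dimensions, window length, and fusion scheme (per-modality projection plus learned modality embeddings, fused with a CLS token) are illustrative assumptions, not details taken from the paper.

import torch
import torch.nn as nn


class ReadingRecognizer(nn.Module):
    def __init__(self, d_model=256, n_heads=4, n_layers=4,
                 rgb_dim=768, gaze_dim=2, pose_dim=6):
        super().__init__()
        # Per-modality linear projections into a shared token space.
        self.proj = nn.ModuleDict({
            "rgb": nn.Linear(rgb_dim, d_model),
            "gaze": nn.Linear(gaze_dim, d_model),
            "pose": nn.Linear(pose_dim, d_model),
        })
        # Learned modality embeddings, added to every token of that modality.
        self.modality_emb = nn.ParameterDict({
            name: nn.Parameter(torch.zeros(1, 1, d_model)) for name in self.proj
        })
        self.cls_token = nn.Parameter(torch.zeros(1, 1, d_model))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, 1)  # reading vs. not-reading logit

    def forward(self, inputs):
        # inputs: dict mapping a subset of {"rgb", "gaze", "pose"}
        # to tensors of shape (batch, time, feature_dim).
        tokens = [self.proj[name](x) + self.modality_emb[name]
                  for name, x in inputs.items()]
        tokens = torch.cat(tokens, dim=1)                    # (B, sum(T), d)
        cls = self.cls_token.expand(tokens.size(0), -1, -1)  # (B, 1, d)
        out = self.encoder(torch.cat([cls, tokens], dim=1))
        return self.head(out[:, 0]).squeeze(-1)              # (B,) logits


# Example: gaze-only inference on a hypothetical 2-second window at 30 Hz.
model = ReadingRecognizer()
logits = model({"gaze": torch.randn(1, 60, 2)})
prob_reading = torch.sigmoid(logits)

Because each modality is projected into a shared token space and tagged with its own embedding, the same encoder can be trained and evaluated with modalities dropped or combined, which is one plausible way to realize the "individually or combined" setting the abstract describes.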

@article{yang2025_2505.24848,
  title={Reading Recognition in the Wild},
  author={Charig Yang and Samiul Alam and Shakhrul Iman Siam and Michael J. Proulx and Lambert Mathias and Kiran Somasundaram and Luis Pesqueira and James Fort and Sheroze Sheriffdeen and Omkar Parkhi and Carl Ren and Mi Zhang and Yuning Chai and Richard Newcombe and Hyo Jin Kim},
  journal={arXiv preprint arXiv:2505.24848},
  year={2025}
}