LEyes: A Lightweight Framework for Deep Learning-Based Eye Tracking using Synthetic Eye Images

12 September 2023
Sean Anthony Byrne
Virmarie Maquiling
Marcus Nyström
Enkelejda Kasneci
Diederick C. Niehorster
Abstract

Deep learning has bolstered gaze estimation techniques, but real-world deployment has been impeded by inadequate training datasets. This problem is exacerbated by both hardware-induced variations in eye images and inherent biological differences across the recorded participants, leading to both feature- and pixel-level variance that hinders the generalizability of models trained on specific datasets. While synthetic datasets can be a solution, their creation is both time- and resource-intensive. To address this problem, we present a framework called Light Eyes or "LEyes" which, unlike conventional photorealistic methods, models only the key image features required for video-based eye tracking using simple light distributions. LEyes facilitates easy configuration for training neural networks across diverse gaze-estimation tasks. We demonstrate that models trained using LEyes are consistently on par with, or outperform, other state-of-the-art algorithms in terms of pupil and CR localization across well-known datasets. In addition, a LEyes-trained model outperforms an industry-standard eye tracker while using significantly more cost-effective hardware. Going forward, we are confident that LEyes will revolutionize synthetic data generation for gaze-estimation models and lead to significant improvements in the next generation of video-based eye trackers.
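The core idea described above is to render only the features a video-based tracker relies on, such as a dark pupil and bright corneal reflections (CRs), as simple light distributions instead of photorealistic eye images. The sketch below is an illustrative approximation of that idea, not the authors' implementation: the function names (gaussian_blob, synth_eye_image) and all blob sizes, amplitudes, and placement ranges are assumptions made for demonstration.

```python
import numpy as np

def gaussian_blob(h, w, cx, cy, sigma, amplitude=1.0):
    """2D isotropic Gaussian evaluated over an h-by-w image grid."""
    ys, xs = np.mgrid[0:h, 0:w]
    return amplitude * np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))

def synth_eye_image(h=128, w=128, rng=None):
    """Compose a synthetic eye-region image from simple light distributions:
    a dark Gaussian 'pupil' plus small bright Gaussian 'corneal reflections'
    on a noisy background. Returns the image and ground-truth centers,
    which could serve as training labels for a localization network.
    All parameter ranges here are illustrative assumptions."""
    rng = rng or np.random.default_rng()
    img = rng.normal(loc=0.5, scale=0.05, size=(h, w))  # noisy background
    # Dark pupil: subtract a broad Gaussian at a random location
    px = rng.uniform(0.3 * w, 0.7 * w)
    py = rng.uniform(0.3 * h, 0.7 * h)
    img -= gaussian_blob(h, w, px, py, sigma=rng.uniform(8, 16), amplitude=0.4)
    # Bright corneal reflections: add two narrow Gaussians near the pupil
    crs = []
    for _ in range(2):
        cx = px + rng.uniform(-10, 10)
        cy = py + rng.uniform(-10, 10)
        img += gaussian_blob(h, w, cx, cy, sigma=rng.uniform(1.0, 2.5), amplitude=0.5)
        crs.append((cx, cy))
    return np.clip(img, 0.0, 1.0).astype(np.float32), {"pupil": (px, py), "crs": crs}

if __name__ == "__main__":
    image, labels = synth_eye_image()
    print(image.shape, labels)
```

Because the image and its labels are generated together, a sketch like this can produce arbitrarily large labeled training sets without manual annotation, which is the practical appeal of the approach.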

View on arXiv
@article{byrne2025_2309.06129,
  title={LEyes: A Lightweight Framework for Deep Learning-Based Eye Tracking using Synthetic Eye Images},
  author={Sean Anthony Byrne and Virmarie Maquiling and Marcus Nyström and Enkelejda Kasneci and Diederick C. Niehorster},
  journal={arXiv preprint arXiv:2309.06129},
  year={2025}
}