GA3CE: Unconstrained 3D Gaze Estimation with Gaze-Aware 3D Context Encoding

15 May 2025
Yuki Kawana
Shintaro Shiba
Quan Kong
Norimasa Kobori
Abstract

We propose a novel 3D gaze estimation approach that learns spatial relationships between the subject and objects in the scene and outputs a 3D gaze direction. Our method targets unconstrained settings, including cases where close-up views of the subject's eyes are unavailable, such as when the subject is distant or facing away. Previous approaches typically either rely on 2D appearance alone or incorporate limited spatial cues from depth maps in a non-learnable post-processing step. Estimating 3D gaze direction from 2D observations in these scenarios is challenging: variations in subject pose, scene layout, and gaze direction, combined with differing camera poses, yield diverse 2D appearances and 3D gaze directions even for the same 3D scene. To address this, we propose GA3CE: Gaze-Aware 3D Context Encoding. Our method represents the subject and the scene using 3D poses and object positions, treating them as a 3D context from which to learn spatial relationships in 3D space. Inspired by human vision, we align this context in an egocentric space, significantly reducing spatial complexity. Furthermore, we propose a D^3 (direction-distance-decomposed) positional encoding to better capture the spatial relationship between the 3D context and the gaze direction in direction and distance space. Experiments demonstrate substantial improvements, reducing mean angle error by 13%-37% compared to leading baselines on benchmark datasets in single-frame settings.
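To make the two core ideas in the abstract concrete, the sketch below illustrates, in plain NumPy, what an egocentric alignment of the 3D context and a direction-distance-decomposed (D^3) positional encoding could look like. This is only a minimal illustration of the general idea, not the paper's actual formulation: the function names, the choice of aligning the head's forward direction to +z, and the Fourier frequency schedule are all assumptions made here for clarity. Splitting each position into a unit direction and a scalar distance lets the model relate context to gaze in direction and distance space, as the abstract describes.

import numpy as np

def egocentric_align(points, head_pos, head_forward):
    """Illustrative egocentric alignment: translate so the subject's head
    is the origin and rotate so the head's forward direction maps to +z."""
    p = points - head_pos[None, :]
    f = head_forward / np.linalg.norm(head_forward)
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(f, z)                      # rotation axis (unnormalized)
    c = float(np.dot(f, z))                 # cosine of rotation angle
    if np.linalg.norm(v) < 1e-8:
        # f already parallel to z: identity, or a 180-degree flip about x
        R = np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    else:
        vx = np.array([[0.0, -v[2], v[1]],
                       [v[2], 0.0, -v[0]],
                       [-v[1], v[0], 0.0]])
        R = np.eye(3) + vx + vx @ vx / (1.0 + c)   # rotation sending f to z
    return p @ R.T                          # apply R to every point

def d3_positional_encoding(points, num_freqs=6):
    """Illustrative direction-distance-decomposed encoding: split each
    subject-centered point into a unit direction and a scalar distance,
    compute Fourier features for each part, and concatenate them."""
    dist = np.linalg.norm(points, axis=-1, keepdims=True)   # (N, 1)
    direction = points / np.clip(dist, 1e-8, None)          # (N, 3)
    freqs = 2.0 ** np.arange(num_freqs)                     # (F,) assumed schedule

    def fourier(x):
        angles = x[..., None] * freqs                       # (N, D, F)
        feats = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
        return feats.reshape(x.shape[0], -1)                # (N, 2*D*F)

    return np.concatenate([fourier(direction), fourier(dist)], axis=-1)

# Example: encode two hypothetical object centers relative to the subject's head
pts = np.array([[1.0, 0.5, 2.0], [-0.3, 1.2, 0.8]])
aligned = egocentric_align(pts, head_pos=np.array([0.0, 1.6, 0.0]),
                           head_forward=np.array([0.0, 0.0, 1.0]))
encoded = d3_positional_encoding(aligned)
print(encoded.shape)   # (2, 48): 2 points x (3+1) dims x 2 (sin, cos) x 6 freqs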

@article{kawana2025_2505.10671,
  title={GA3CE: Unconstrained 3D Gaze Estimation with Gaze-Aware 3D Context Encoding},
  author={Yuki Kawana and Shintaro Shiba and Quan Kong and Norimasa Kobori},
  journal={arXiv preprint arXiv:2505.10671},
  year={2025}
}