Depth3DLane: Monocular 3D Lane Detection via Depth Prior Distillation

25 April 2025
Dongxin Lyu, Han Huang, Cheng Tan, Zimu Li
Abstract

Monocular 3D lane detection is challenging due to the difficulty of capturing depth information from single-camera images. A common strategy transforms front-view (FV) images into bird's-eye-view (BEV) space through inverse perspective mapping (IPM), facilitating lane detection using BEV features. However, IPM's flat-ground assumption and loss of contextual information lead to inaccuracies in reconstructing 3D information, especially height. In this paper, we introduce a BEV-based framework to address these limitations and improve 3D lane detection accuracy. Our approach incorporates a Hierarchical Depth-Aware Head that provides multi-scale depth features, mitigating the flat-ground assumption by enhancing spatial awareness across varying depths. Additionally, we leverage Depth Prior Distillation to transfer semantic depth knowledge from a teacher model, capturing richer structural and contextual information for complex lane structures. To further refine lane continuity and ensure smooth lane reconstruction, we introduce a Conditional Random Field module that enforces spatial coherence in lane predictions. Extensive experiments validate that our method achieves state-of-the-art performance in z-axis error and outperforms existing methods in overall performance. The code is released at: this https URL.
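The abstract describes two of the key components only at a high level. As a rough illustration of the depth prior distillation idea (a frozen monocular depth teacher supervising the student's multi-scale depth features), a minimal PyTorch sketch might look like the following. The module name, the choice of scales, and the smooth-L1 distillation loss are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of depth prior distillation:
# a frozen depth "teacher" supervises multi-scale depth maps produced by
# the student's depth-aware head. Names and loss choice are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DepthDistillationLoss(nn.Module):
    """Align student depth predictions at several scales with a teacher depth map."""

    def __init__(self, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales

    def forward(self, student_depths, teacher_depth):
        # student_depths: list of (B, 1, H/s, W/s) maps from the depth-aware head
        # teacher_depth:  (B, 1, H, W) map from a frozen teacher model
        loss = 0.0
        for s, pred in zip(self.scales, student_depths):
            # Downsample the teacher map to the student's resolution at this scale.
            target = F.interpolate(teacher_depth, scale_factor=1.0 / s,
                                   mode="bilinear", align_corners=False)
            loss = loss + F.smooth_l1_loss(pred, target)
        return loss / len(self.scales)


if __name__ == "__main__":
    # Random tensors stand in for real teacher output and student features.
    teacher = torch.rand(2, 1, 192, 320)
    students = [torch.rand(2, 1, 192 // s, 320 // s) for s in (1, 2, 4)]
    print(DepthDistillationLoss()(students, teacher).item())
```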

@article{lyu2025_2504.18325,
  title={Depth3DLane: Monocular 3D Lane Detection via Depth Prior Distillation},
  author={Dongxin Lyu and Han Huang and Cheng Tan and Zimu Li},
  journal={arXiv preprint arXiv:2504.18325},
  year={2025}
}