ResearchTrend.AI


Zero-Shot 4D Lidar Panoptic Segmentation

1 April 2025
Yushan Zhang
Aljoša Ošep
Laura Leal-Taixé
Tim Meinhardt
Abstract

Zero-shot 4D segmentation and recognition of arbitrary objects in Lidar is crucial for embodied navigation, with applications ranging from streaming perception to semantic mapping and localization. However, the primary challenge in advancing research and developing generalized, versatile methods for spatio-temporal scene understanding in Lidar lies in the scarcity of datasets that provide the necessary diversity and scale. To overcome these challenges, we propose SAL-4D (Segment Anything in Lidar--4D), a method that utilizes multi-modal robotic sensor setups as a bridge to distill recent developments in Video Object Segmentation (VOS), in conjunction with off-the-shelf Vision-Language foundation models, to Lidar. We use VOS models to pseudo-label tracklets in short video sequences, annotate these tracklets with sequence-level CLIP tokens, and lift them to the 4D Lidar space using calibrated multi-modal sensor setups, distilling them into our SAL-4D model. Thanks to temporally consistent predictions, we outperform prior art in 3D Zero-Shot Lidar Panoptic Segmentation (LPS) by over 5 PQ, and unlock Zero-Shot 4D-LPS.
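The lifting step the abstract describes (transferring a 2D video-mask pseudo-label onto Lidar points via calibrated sensors) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the function name, argument names, and the pinhole-projection setup are assumptions for the sake of the example.

```python
# Hypothetical sketch of lifting a 2D mask pseudo-label into Lidar space:
# project Lidar points into the camera image and mark every point that
# lands inside the mask. All names here are illustrative.
import numpy as np

def lift_mask_to_lidar(points, mask, K, T_cam_from_lidar):
    """Assign the 2D mask label to Lidar points that project inside it.

    points: (N, 3) Lidar points; mask: (H, W) bool pseudo-label;
    K: (3, 3) camera intrinsics; T_cam_from_lidar: (4, 4) extrinsics.
    Returns a (N,) bool array: True where a point inherits the mask label.
    """
    h, w = mask.shape
    # Transform Lidar points into the camera frame (homogeneous coords).
    pts_h = np.hstack([points, np.ones((len(points), 1))])
    cam = (T_cam_from_lidar @ pts_h.T).T[:, :3]
    in_front = cam[:, 2] > 0  # points behind the camera cannot be labeled
    # Pinhole projection to pixel coordinates.
    uv = (K @ cam.T).T
    uv = uv[:, :2] / np.clip(uv[:, 2:3], 1e-6, None)
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    valid = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    labels = np.zeros(len(points), dtype=bool)
    labels[valid] = mask[v[valid], u[valid]]
    return labels
```

Repeating this per frame for each VOS tracklet, and attaching the tracklet's sequence-level CLIP token to the labeled points, yields the kind of 4D pseudo-labels the abstract says are distilled into SAL-4D.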

View on arXiv
@article{zhang2025_2504.00848,
  title={Zero-Shot 4D Lidar Panoptic Segmentation},
  author={Yushan Zhang and Aljoša Ošep and Laura Leal-Taixé and Tim Meinhardt},
  journal={arXiv preprint arXiv:2504.00848},
  year={2025}
}