DeltaFlow: An Efficient Multi-frame Scene Flow Estimation Method

23 August 2025
Qingwen Zhang
Xiaomeng Zhu
Yushan Zhang
Yixi Cai
Olov Andersson
Patric Jensfelt
arXiv: 2508.17054 (abs) · PDF · HTML · GitHub (4★)
Main: 9 pages · Bibliography: 2 pages · Appendix: 6 pages · 13 figures · 8 tables
Abstract

Previous dominant methods for scene flow estimation focus mainly on input from two consecutive frames, neglecting valuable information in the temporal domain. While recent trends shift towards multi-frame reasoning, they suffer from rapidly escalating computational costs as the number of frames grows. To leverage temporal information more efficiently, we propose DeltaFlow (ΔFlow), a lightweight 3D framework that captures motion cues via a Δ scheme, extracting temporal features with minimal computational cost, regardless of the number of frames. Additionally, scene flow estimation faces challenges such as imbalanced object class distributions and motion inconsistency. To tackle these issues, we introduce a Category-Balanced Loss to enhance learning across underrepresented classes and an Instance Consistency Loss to enforce coherent object motion, improving flow accuracy. Extensive evaluations on the Argoverse 2 and Waymo datasets show that ΔFlow achieves state-of-the-art performance with up to 22% lower error and 2× faster inference compared to the next-best multi-frame supervised method, while also demonstrating a strong cross-domain generalization ability. The code is open-sourced at this https URL along with trained model weights.
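The abstract does not spell out how the Category-Balanced Loss is formulated, so the snippet below is only a generic illustration of the underlying idea: reweighting a per-point flow error by inverse class frequency so that underrepresented categories contribute more to training. The function name `category_balanced_epe` and the specific weighting scheme are assumptions for illustration, not the paper's implementation.

```python
# Hypothetical sketch of class-balanced reweighting for a scene flow loss.
# NOT the paper's Category-Balanced Loss; a generic inverse-frequency variant.
import torch

def category_balanced_epe(pred_flow, gt_flow, class_ids, num_classes, eps=1e-6):
    """pred_flow, gt_flow: (N, 3) per-point flow vectors; class_ids: (N,) int labels."""
    epe = torch.norm(pred_flow - gt_flow, dim=1)                 # per-point endpoint error
    counts = torch.bincount(class_ids, minlength=num_classes).float()
    weights = 1.0 / (counts + eps)                               # rare classes get larger weights
    weights = weights / weights.sum() * num_classes              # normalize to mean ~1
    return (weights[class_ids] * epe).mean()
```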
