Target-aware Bidirectional Fusion Transformer for Aerial Object Tracking

13 March 2025
Xinglong Sun
Haijiang Sun
Shan Jiang
Jiacheng Wang
Jiasong Wang
Abstract

Trackers based on lightweight neural networks have achieved great success in aerial remote sensing, and most of them aggregate multi-stage deep features to improve tracking quality. However, existing algorithms usually generate only single-stage fusion features for state decision, ignoring that diverse kinds of features are required to identify and locate the object, which limits the robustness and precision of tracking. In this paper, we propose a novel target-aware Bidirectional Fusion transformer (BFTrans) for UAV tracking. Specifically, we first present a two-stream fusion network based on linear self- and cross-attention, which combines shallow and deep features in both forward and backward directions, providing adjusted local details for localization and global semantics for recognition. In addition, a target-aware positional encoding strategy is designed for this fusion model, helping it perceive object-related attributes during the fusion phase. Finally, the proposed method is evaluated on several popular UAV benchmarks, including UAV123, UAV20L and UAVTrack112. Extensive experimental results demonstrate that our approach outperforms other state-of-the-art trackers and runs at an average speed of 30.5 FPS on an embedded platform, making it suitable for practical drone deployments.
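To make the described mechanism more concrete, below is a minimal, hypothetical PyTorch sketch of a bidirectional shallow/deep fusion block built on linear attention. It assumes a generic linear-attention kernel (elu(x) + 1) and that target-aware positional encodings are simply added to queries and keys; the module names, dimensions, and fusion details are illustrative assumptions and not the paper's exact design.

import torch
import torch.nn as nn
import torch.nn.functional as F


def linear_attention(q, k, v, eps=1e-6):
    """O(N) attention: phi(q) (phi(k)^T v) with phi(x) = elu(x) + 1 (assumed kernel)."""
    q = F.elu(q) + 1.0                        # (B, N_q, D)
    k = F.elu(k) + 1.0                        # (B, N_k, D)
    kv = torch.einsum("bnd,bne->bde", k, v)   # (B, D, D)
    z = 1.0 / (torch.einsum("bnd,bd->bn", q, k.sum(dim=1)) + eps)
    return torch.einsum("bnd,bde,bn->bne", q, kv, z)


class FusionBranch(nn.Module):
    """One fusion direction: queries from one stage attend to the other stage."""
    def __init__(self, dim):
        super().__init__()
        self.q_proj = nn.Linear(dim, dim)
        self.k_proj = nn.Linear(dim, dim)
        self.v_proj = nn.Linear(dim, dim)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x_q, x_kv, pos_q=None, pos_kv=None):
        # Target-aware positional encodings (if provided) are added to
        # queries/keys before attention -- an assumption for this sketch.
        q = x_q + pos_q if pos_q is not None else x_q
        k = x_kv + pos_kv if pos_kv is not None else x_kv
        out = linear_attention(self.q_proj(q), self.k_proj(k), self.v_proj(x_kv))
        return self.norm(x_q + out)           # residual connection + layer norm


class BidirectionalFusion(nn.Module):
    """Fuses shallow (local detail) and deep (semantic) tokens in both directions."""
    def __init__(self, dim=256):
        super().__init__()
        self.shallow_to_deep = FusionBranch(dim)  # deep queries attend to shallow tokens
        self.deep_to_shallow = FusionBranch(dim)  # shallow queries attend to deep tokens

    def forward(self, shallow, deep, pos_shallow=None, pos_deep=None):
        deep_fused = self.shallow_to_deep(deep, shallow, pos_deep, pos_shallow)
        shallow_fused = self.deep_to_shallow(shallow, deep, pos_shallow, pos_deep)
        return shallow_fused, deep_fused


if __name__ == "__main__":
    B, N, D = 2, 64, 256                      # batch, tokens per stage, channels
    shallow, deep = torch.randn(B, N, D), torch.randn(B, N, D)
    fused_s, fused_d = BidirectionalFusion(D)(shallow, deep)
    print(fused_s.shape, fused_d.shape)       # torch.Size([2, 64, 256]) each

The linear-attention formulation keeps the fusion cost linear in the number of tokens, which is consistent with the abstract's emphasis on lightweight networks and real-time embedded deployment, though the paper's actual attention variant may differ.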

View on arXiv
@article{sun2025_2503.09951,
  title={Target-aware Bidirectional Fusion Transformer for Aerial Object Tracking},
  author={Xinglong Sun and Haijiang Sun and Shan Jiang and Jiacheng Wang and Jiasong Wang},
  journal={arXiv preprint arXiv:2503.09951},
  year={2025}
}