CrossRF: A Domain-Invariant Deep Learning Approach for RF Fingerprinting

21 May 2025
Fahrettin Emin Tiras, Hayriye Serra Altinoluk
arXiv:2505.18200 (abs | PDF | HTML)
Main: 7 pages, 6 figures, 3 tables; Bibliography: 1 page; Appendix: 1 page
Abstract

Radio Frequency (RF) fingerprinting offers a promising approach to drone identification and security, but it suffers significant performance degradation when operating across different transmission channels. This paper presents CrossRF, a domain-invariant deep learning approach that addresses the problem of cross-channel RF fingerprinting for Unmanned Aerial Vehicle (UAV) identification. Our approach minimizes the domain gap between RF channels by using adversarial learning to train a more robust model that maintains consistent identification performance despite channel variations. We validate the approach on the UAVSig dataset, which comprises real-world over-the-air RF signals from identical drone models operating across several frequency channels, so the findings reflect realistic operating conditions. The experimental results demonstrate CrossRF's effectiveness: it achieves up to 99.03% accuracy when adapting from Channel 3 to Channel 4, compared with only 26.39% for conventional methods. The model remains robust in more difficult multi-channel scenarios (87.57% accuracy when adapting from Channels 1,3 to Channels 2,4) and achieves 89.45% accuracy with 0.9 precision for controller classification. These results confirm that CrossRF substantially reduces the performance degradation caused by cross-channel variations while maintaining high identification accuracy with minimal training data, making it particularly suitable for practical drone security applications.
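As a concrete illustration of the adversarial, domain-invariant training idea described in the abstract, the sketch below shows a DANN-style setup in PyTorch: a shared feature extractor over raw I/Q samples feeds both a device classifier and a channel (domain) discriminator, and a gradient reversal layer pushes the learned features toward channel invariance. The architecture, layer sizes, input shapes, and loss weighting are illustrative assumptions, not the paper's exact model.

# Minimal sketch of domain-adversarial training for cross-channel RF
# fingerprinting (DANN-style gradient reversal). Hyperparameters and
# network structure are assumptions for illustration only.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reverses and scales gradients backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class CrossChannelRFNet(nn.Module):
    def __init__(self, n_devices: int, n_channels: int):
        super().__init__()
        # Shared feature extractor over raw I/Q frames of shape (2, iq_len).
        self.features = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        # Device classifier: identifies the transmitting drone/controller.
        self.device_head = nn.Linear(64, n_devices)
        # Channel discriminator: the feature extractor is trained *against*
        # it via gradient reversal, encouraging channel-invariant features.
        self.channel_head = nn.Linear(64, n_channels)

    def forward(self, x, lambd: float = 1.0):
        feat = self.features(x)
        device_logits = self.device_head(feat)
        channel_logits = self.channel_head(GradReverse.apply(feat, lambd))
        return device_logits, channel_logits

# Usage sketch: one training step on a batch of labeled I/Q frames.
model = CrossChannelRFNet(n_devices=7, n_channels=4)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(8, 2, 1024)            # batch of raw I/Q frames (assumed shape)
y_dev = torch.randint(0, 7, (8,))      # device identity labels
y_ch = torch.randint(0, 4, (8,))       # transmission-channel (domain) labels
dev_logits, ch_logits = model(x, lambd=0.5)
loss = (nn.functional.cross_entropy(dev_logits, y_dev)
        + nn.functional.cross_entropy(ch_logits, y_ch))
opt.zero_grad(); loss.backward(); opt.step()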
