ResearchTrend.AI

arXiv:2505.13847 (v2, latest)

Forensic deepfake audio detection using segmental speech features

20 May 2025
Tianle Yang, Chengzhe Sun, Siwei Lyu, Phil Rose
Main text: 6 pages, 3 figures, 2 tables; bibliography: 1 page
Abstract

This study explores the potential of using acoustic features of segmental speech sounds to detect deepfake audio. These features are highly interpretable because of their close relationship with human articulatory processes, and they are expected to be more difficult for deepfake models to replicate. The results demonstrate that certain segmental features commonly used in forensic voice comparison are effective in identifying deepfakes, whereas some global features provide little value. These findings underscore the need to approach audio deepfake detection differently for forensic voice comparison and offer a new perspective on leveraging segmental features for this purpose.
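To make the segmental-versus-global distinction concrete: a segmental feature is computed over a labelled phonetic span (e.g. one vowel or one fricative), while a global feature summarizes the whole utterance. The toy sketch below is not the authors' method; it uses a simple zero-crossing rate on synthetic audio purely to illustrate how per-segment features can separate spans that a single utterance-level statistic blends together. The segment labels, sample rate, and signal are all invented for illustration.

```python
import math

def zero_crossing_rate(samples):
    """Fraction of adjacent sample pairs whose sign differs."""
    crossings = sum(
        1 for a, b in zip(samples, samples[1:]) if (a >= 0) != (b >= 0)
    )
    return crossings / max(len(samples) - 1, 1)

def segmental_features(samples, segments):
    """One feature value per labelled span (start, end, label)."""
    return {
        label: zero_crossing_rate(samples[start:end])
        for start, end, label in segments
    }

def global_feature(samples):
    """A single utterance-level summary, analogous to a 'global' feature."""
    return zero_crossing_rate(samples)

# Synthetic utterance: a low-frequency 'vowel-like' span followed by a
# high-frequency 'fricative-like' span (illustrative only).
sr = 8000
vowel = [math.sin(2 * math.pi * 120 * t / sr) for t in range(2000)]
fricative = [math.sin(2 * math.pi * 3000 * t / sr) for t in range(2000)]
audio = vowel + fricative

segments = [(0, 2000, "vowel"), (2000, 4000, "fricative")]
seg = segmental_features(audio, segments)
glob = global_feature(audio)
print(seg, glob)
```

The per-segment values differ sharply between the vowel-like and fricative-like spans, while the global value averages them out, which is the kind of information loss the paper's argument for segmental features turns on.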

@article{yang2025_2505.13847,
  title={Forensic deepfake audio detection using segmental speech features},
  author={Tianle Yang and Chengzhe Sun and Siwei Lyu and Phil Rose},
  journal={arXiv preprint arXiv:2505.13847},
  year={2025}
}