120

HearSay Benchmark: Do Audio LLMs Leak What They Hear?

Jin Wang
Liang Lin
Kaiwen Luo
Weiliu Wang
Yitian Chen
Moayad Aloqaily
Xuehai Tang
Zhenhong Zhou
Kun Wang
Li Sun
Qingsong Wen
Main:7 Pages
6 Figures
Bibliography:4 Pages
2 Tables
Appendix:9 Pages
Abstract

While Audio Large Language Models (ALLMs) have achieved remarkable progress in understanding and generation, their potential privacy implications remain largely unexplored. This paper takes the first step to investigate whether ALLMs inadvertently leak user privacy solely through acoustic voiceprints and introduces HearSay\textit{HearSay}, a comprehensive benchmark constructed from over 22,000 real-world audio clips. To ensure data quality, the benchmark is meticulously curated through a rigorous pipeline involving automated profiling and human verification, guaranteeing that all privacy labels are grounded in factual records. Extensive experiments on HearSay\textit{HearSay} yield three critical findings: Significant Privacy Leakage\textbf{Significant Privacy Leakage}: ALLMs inherently extract private attributes from voiceprints, reaching 92.89% accuracy on gender and effectively profiling social attributes. Insufficient Safety Mechanisms\textbf{Insufficient Safety Mechanisms}: Alarmingly, existing safeguards are severely inadequate; most models fail to refuse privacy-intruding requests, exhibiting near-zero refusal rates for physiological traits. Reasoning Amplifies Risk\textbf{Reasoning Amplifies Risk}: Chain-of-Thought (CoT) reasoning exacerbates privacy risks in capable models by uncovering deeper acoustic correlations. These findings expose critical vulnerabilities in ALLMs, underscoring the urgent need for targeted privacy alignment. The codes and dataset are available atthis https URL

View on arXiv
Comments on this paper