
Leveraging Large Language Models for Sarcastic Speech Annotation in Sarcasm Detection

Main: 4 pages, 3 figures, 3 tables; Bibliography: 1 page
Abstract

Sarcasm fundamentally alters meaning through tone and context, yet detecting it in speech remains a challenge due to data scarcity. In addition, existing detection systems often rely on multimodal data, limiting their applicability in contexts where only speech is available. To address this, we propose an annotation pipeline that leverages large language models (LLMs) to generate a sarcasm dataset. Using a publicly available sarcasm-focused podcast, we employ GPT-4o and LLaMA 3 for initial sarcasm annotations, followed by human verification to resolve disagreements. We validate this approach by comparing annotation quality and detection performance on a publicly available sarcasm dataset using a collaborative gating architecture. Finally, we introduce PodSarc, a large-scale sarcastic speech dataset created through this pipeline. The detection model achieves a 73.63% F1 score, demonstrating the dataset's potential as a benchmark for sarcasm detection research.
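The abstract describes a dual-LLM annotation pipeline in which GPT-4o and LLaMA 3 each label utterances and humans adjudicate disagreements. The sketch below is an assumed illustration of that workflow, not the authors' implementation: the prompt wording, the `annotate_with_llama3` placeholder, and the agreement logic are all hypothetical, and only the OpenAI client call reflects a real API.

```python
# Hypothetical sketch of the dual-LLM annotation step: two models label each
# utterance as sarcastic or literal, and utterances where they disagree are
# queued for human verification. Details are assumptions, not the paper's code.
from openai import OpenAI  # assumes the official openai Python client is installed

client = OpenAI()

PROMPT = (  # assumed prompt; the paper's actual prompt is not reproduced here
    "You are annotating podcast transcripts for sarcasm. "
    "Reply with exactly one word: 'sarcastic' or 'literal'.\n\nUtterance: {utterance}"
)


def annotate_with_gpt4o(utterance: str) -> str:
    """Ask GPT-4o for a binary sarcasm label."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": PROMPT.format(utterance=utterance)}],
        temperature=0,
    )
    return response.choices[0].message.content.strip().lower()


def annotate_with_llama3(utterance: str) -> str:
    """Placeholder for a LLaMA 3 annotator (e.g. a locally served model);
    the serving stack and prompt used in the paper are not specified here."""
    raise NotImplementedError


def build_labels(utterances: list[str]):
    """Split utterances into auto-labeled (models agree) and human-review queues."""
    auto_labeled, needs_review = [], []
    for utt in utterances:
        label_a = annotate_with_gpt4o(utt)
        label_b = annotate_with_llama3(utt)
        if label_a == label_b:
            auto_labeled.append((utt, label_a))        # agreement: keep the label
        else:
            needs_review.append((utt, (label_a, label_b)))  # disagreement: human check
    return auto_labeled, needs_review
```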

@article{li2025_2506.00955,
  title={Leveraging Large Language Models for Sarcastic Speech Annotation in Sarcasm Detection},
  author={Zhu Li and Yuqing Zhang and Xiyuan Gao and Shekhar Nayak and Matt Coler},
  journal={arXiv preprint arXiv:2506.00955},
  year={2025}
}