
ALAS: Measuring Latent Speech-Text Alignment For Spoken Language Understanding In Multimodal LLMs

Main: 4 pages
Figures: 4
Bibliography: 2 pages
Appendix: 1 page
Abstract

Large Language Models (LLMs) are increasingly used in Spoken Language Understanding (SLU), where effective multimodal learning depends on the alignment between audio and text. Despite various fusion methods, no standard metric exists to assess this alignment. This work introduces ALAS (Automatic Latent Alignment Score), a metric that evaluates alignment by measuring correlations between audio and text representations across transformer layers. Experiments on Spoken Question Answering and Emotion Recognition show that ALAS captures meaningful patterns across tasks and layers.
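
As a rough illustration of the idea described in the abstract, the sketch below computes one alignment score per transformer layer by correlating pooled audio and text hidden states. This is an assumption-based sketch, not the authors' ALAS implementation: the mean pooling, the Pearson correlation, and the layerwise_alignment name are hypothetical choices for illustration only.

# Minimal sketch of a layer-wise speech-text alignment score.
# Assumption: hidden states are available per layer for both modalities.
import numpy as np

def layerwise_alignment(audio_states, text_states):
    """Return one alignment score per layer.

    audio_states: list of arrays, one per layer, shape (n_audio_frames, d)
    text_states:  list of arrays, one per layer, shape (n_text_tokens, d)
    """
    scores = []
    for audio_h, text_h in zip(audio_states, text_states):
        # Mean-pool each modality into a single d-dimensional vector.
        audio_vec = audio_h.mean(axis=0)
        text_vec = text_h.mean(axis=0)
        # Pearson correlation between the pooled vectors (hypothetical choice).
        r = np.corrcoef(audio_vec, text_vec)[0, 1]
        scores.append(float(r))
    return scores

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, n_layers = 16, 4
    # Toy hidden states standing in for a multimodal LLM's per-layer outputs.
    audio = [rng.normal(size=(50, d)) for _ in range(n_layers)]
    text = [rng.normal(size=(12, d)) for _ in range(n_layers)]
    print(layerwise_alignment(audio, text))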

@article{mousavi2025_2505.19937,
  title={ALAS: Measuring Latent Speech-Text Alignment For Spoken Language Understanding In Multimodal LLMs},
  author={Pooneh Mousavi and Yingzhi Wang and Mirco Ravanelli and Cem Subakan},
  journal={arXiv preprint arXiv:2505.19937},
  year={2025}
}