
Time-Frequency-Based Attention Cache Memory Model for Real-Time Speech Separation

Abstract

Existing causal speech separation models often underperform compared to non-causal models because they struggle to retain historical information. To address this, we propose the Time-Frequency Attention Cache Memory (TFACM) model, which captures spatio-temporal relationships through an attention mechanism and a cache memory (CM) for storing historical information. In TFACM, an LSTM layer captures frequency-relative positions, while causal modeling is applied to the time dimension using local and global representations. The CM module stores past information, and the causal attention refinement (CAR) module further refines time-based feature representations at a finer granularity. Experimental results showed that TFACM achieved comparable performance to the SOTA TF-GridNet-Causal model, with significantly lower complexity and fewer trainable parameters. For more details, visit the project page: this https URL.
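
The abstract describes the architecture only at a high level. As a rough illustration, the PyTorch-style sketch below shows one way a frequency LSTM, a cache of past time frames, and causal attention over time could be combined in a single block; the class name, tensor layout, cache length, and cache-update policy are assumptions made for illustration and do not reproduce the authors' TFACM implementation.

import torch
import torch.nn as nn


class TFACMBlockSketch(nn.Module):
    """Illustrative block: frequency LSTM + cached causal attention over time.

    Hypothetical sketch only, not the authors' TFACM implementation.
    """

    def __init__(self, channels=64, num_heads=4, cache_len=32):
        super().__init__()
        # Frequency path: bidirectional LSTM across frequency bins
        # (frequency is not the causal axis, so looking both ways is allowed).
        self.freq_lstm = nn.LSTM(channels, channels // 2,
                                 batch_first=True, bidirectional=True)
        # Time path: multi-head attention whose keys/values include a cache
        # of past frames, so the current chunk can attend to history.
        self.time_attn = nn.MultiheadAttention(channels, num_heads,
                                               batch_first=True)
        self.cache_len = cache_len

    def forward(self, x, cache=None):
        # x: (batch, time, freq, channels); cache: (batch, <=cache_len, freq, channels) or None
        b, t, f, c = x.shape

        # Frequency modeling: run the LSTM over the frequency axis of each frame.
        xf = x.reshape(b * t, f, c)
        xf, _ = self.freq_lstm(xf)
        x = xf.reshape(b, t, f, c)

        # Time modeling: queries are the current frames; keys/values are the
        # cached past frames concatenated with the current frames.
        q = x.permute(0, 2, 1, 3).reshape(b * f, t, c)
        if cache is None:
            kv, past_len = q, 0
        else:
            past = cache.permute(0, 2, 1, 3).reshape(b * f, -1, c)
            kv, past_len = torch.cat([past, q], dim=1), past.shape[1]

        # Causal mask: frame i may attend to every cached frame and to frames <= i.
        causal = torch.triu(torch.ones(t, t, dtype=torch.bool, device=x.device), 1)
        mask = torch.cat([torch.zeros(t, past_len, dtype=torch.bool,
                                      device=x.device), causal], dim=1)
        out, _ = self.time_attn(q, kv, kv, attn_mask=mask)
        x = x + out.reshape(b, f, t, c).permute(0, 2, 1, 3)  # residual connection

        # Cache update: keep only the most recent cache_len frames for the next chunk.
        new_cache = x if cache is None else torch.cat([cache, x], dim=1)
        return x, new_cache[:, -self.cache_len:].detach()

In a streaming setting, such a block would be invoked once per incoming chunk, carrying the returned cache forward so that attention over the current frames always sees a bounded window of history rather than the full past.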

@article{chen2025_2505.13094,
  title={Time-Frequency-Based Attention Cache Memory Model for Real-Time Speech Separation},
  author={Guo Chen and Kai Li and Runxuan Yang and Xiaolin Hu},
  journal={arXiv preprint arXiv:2505.13094},
  year={2025}
}