Learning Advanced Self-Attention for Linear Transformers in the Singular Value Domain

Abstract

Transformers have demonstrated remarkable performance across diverse domains. The key component of Transformers is self-attention, which learns the relationship between any two tokens in the input sequence. Recent studies have revealed that self-attention can be understood as a normalized adjacency matrix of a graph. Notably, from the perspective of graph signal processing (GSP), self-attention can be equivalently defined as a simple graph filter that applies GSP using the value vector as the signal. However, self-attention is a graph filter defined with only the first order of the polynomial matrix and acts as a low-pass filter, preventing the effective use of various frequency information. Consequently, existing self-attention mechanisms are designed in a rather simplified manner. Therefore, we propose a novel method, called Attentive Graph Filter (AGF), which interprets self-attention as learning a graph filter in the singular value domain from the perspective of graph signal processing for directed graphs, with linear complexity w.r.t. the input length n, i.e., O(nd^2). In our experiments, we demonstrate that AGF achieves state-of-the-art performance on various tasks, including the Long Range Arena benchmark and time series classification.
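Two background facts the abstract builds on can be illustrated concretely: the softmax attention matrix is row-stochastic, so it can be read as a normalized adjacency matrix of a directed graph over tokens, and reordering a kernelized attention product reduces the cost from O(n^2 d) to O(nd^2). The sketch below shows only these standard constructions, not the paper's AGF method; the feature map `phi` (elu+1, as used in prior linear-attention work) and all variable names are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 8, 4  # sequence length, feature dimension
Q, K, V = (rng.normal(size=(n, d)) for _ in range(3))

# Standard self-attention: softmax(Q K^T / sqrt(d)) V.
# The softmax matrix A is row-stochastic (each row sums to 1),
# so it can be viewed as the normalized adjacency matrix of a
# weighted directed graph over the n tokens.
scores = Q @ K.T / np.sqrt(d)
A = np.exp(scores - scores.max(axis=1, keepdims=True))
A /= A.sum(axis=1, keepdims=True)
out_quadratic = A @ V  # O(n^2 d): materializes the n x n graph

def phi(x):
    # A positive feature map, elu(x) + 1; an assumption here.
    return np.where(x > 0, x + 1.0, np.exp(x))

# Linearized attention: reordering (phi(Q) phi(K)^T) V as
# phi(Q) (phi(K)^T V) avoids the n x n matrix entirely,
# giving the O(nd^2) complexity the abstract refers to.
KV = phi(K).T @ V                   # d x d summary of keys/values
Z = phi(Q) @ phi(K).sum(axis=0)     # length-n row normalizer
out_linear = (phi(Q) @ KV) / Z[:, None]
```

The reordering trick is what makes the per-token cost independent of the sequence length: only d x d and n x d products appear.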

@article{wi2025_2505.08516,
  title={Learning Advanced Self-Attention for Linear Transformers in the Singular Value Domain},
  author={Hyowon Wi and Jeongwhan Choi and Noseong Park},
  journal={arXiv preprint arXiv:2505.08516},
  year={2025}
}