
Randomized and Deterministic Attention Sparsification Algorithms for Over-parameterized Feature Dimension

Abstract

Large language models (LLMs) have shown their power in many different areas. Attention computation, an important subroutine of LLMs, has also attracted interest from the theory community. Recently, the static computation and dynamic maintenance of the attention matrix were studied by [Alman and Song 2023] and [Brand, Song and Zhou 2023], from both the algorithmic and the hardness perspectives.

In this work, we consider the sparsification of the attention problem, under one simplifying assumption: the logit matrix is symmetric. Let $n$ denote the sentence length and let $d$ denote the embedding dimension. Given a matrix $X \in \mathbb{R}^{n \times d}$, suppose $d \gg n$ and $\| X X^\top \|_{\infty} < r$ for some $r \in (0,0.1)$. We aim to find a matrix $Y \in \mathbb{R}^{n \times m}$ (where $m \ll d$) such that
\begin{align*}
\| D(Y)^{-1} \exp( Y Y^\top ) - D(X)^{-1} \exp( X X^\top ) \|_{\infty} \leq O(r),
\end{align*}
where $D(\cdot)$ is the diagonal matrix that normalizes each row of the $\exp(\cdot)$ term to sum to one (the softmax normalization). We provide two results for this problem.

$\bullet$ Our first result is a randomized algorithm. It runs in $\widetilde{O}(\mathrm{nnz}(X) + n^{\omega})$ time, succeeds with probability $1-\delta$, and chooses $m = O(n \log(n/\delta))$. Here $\mathrm{nnz}(X)$ denotes the number of non-zero entries in $X$, and $\omega$ denotes the exponent of matrix multiplication; currently $\omega \approx 2.373$.

$\bullet$ Our second result is a deterministic algorithm. It runs in $\widetilde{O}(\min\{\sum_{i\in[d]}\mathrm{nnz}(X_i)^2, \, dn^{\omega-1}\} + n^{\omega+1})$ time and chooses $m = O(n)$. Here $X_i$ denotes the $i$-th column of the matrix $X$.

Our main findings have the following implication for applied LLM tasks: any super-large feature dimension can be reduced to a size nearly linear in the sentence length.
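The randomized result amounts to replacing $X$ by a randomly sketched $Y = XS$ with far fewer columns. The NumPy sketch below is a rough illustration of that idea, not the paper's algorithm: it uses a dense Gaussian projection with $m = O(n \log(n/\delta))$ columns, whereas the paper's $\widetilde{O}(\mathrm{nnz}(X))$-time guarantee would require a sparse sketch (e.g., CountSketch/OSNAP). The dimensions, the value of $r$, and the rescaling step are all illustrative assumptions.

```python
import numpy as np


def softmax_attention(Z):
    """Row-normalized attention D^{-1} exp(Z), where D = diag(exp(Z) @ 1)."""
    A = np.exp(Z)
    return A / A.sum(axis=1, keepdims=True)


def sketch_features(X, m, rng):
    """Reduce the feature dimension via Y = X S / sqrt(m) with a Gaussian
    sketch S in R^{d x m}, so that Y Y^T concentrates around X X^T
    entrywise. (A simple stand-in for the paper's faster sparse sketch.)"""
    d = X.shape[1]
    S = rng.standard_normal((d, m))
    return (X @ S) / np.sqrt(m)


rng = np.random.default_rng(0)
n, d, r, delta = 32, 4096, 0.05, 0.01      # sentence length n, feature dim d >> n

X = rng.standard_normal((n, d))
X *= np.sqrt(r / np.abs(X @ X.T).max())    # rescale so that ||X X^T||_inf < r

m = int(np.ceil(n * np.log(n / delta)))    # m = O(n log(n/delta)), still << d
Y = sketch_features(X, m, rng)

err = np.abs(softmax_attention(Y @ Y.T) - softmax_attention(X @ X.T)).max()
print(f"m = {m}, entrywise attention error = {err:.3e}, target O(r) with r = {r}")
```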
