Traditional Transformers face a major bottleneck in long-sequence time series forecasting due to their quadratic complexity and their limited ability to effectively exploit frequency-domain information. Inspired by RWKV's linear attention and by frequency-domain modeling, we propose FRWKV, a frequency-domain linear-attention framework that overcomes these limitations. Our model integrates linear attention mechanisms with frequency-domain analysis, achieving linear computational complexity in the attention path while exploiting spectral information to enhance temporal feature representations for scalable long-sequence modeling. Across eight real-world datasets, FRWKV achieves a first-place average rank. Our ablation studies confirm the critical roles of both the linear-attention and frequency-encoder components. This work demonstrates the powerful synergy between linear attention and frequency analysis, establishing a new paradigm for scalable time series modeling. Code is available at this https URL.
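The abstract names the two components but does not describe their internals, so the following is a minimal PyTorch sketch of how such a block might be assembled, not the authors' implementation. A generic kernelized linear attention (elu-based feature map, non-causal) stands in for the RWKV-style attention path, and an FFT-based encoder with a learnable spectral filter stands in for the frequency encoder. All module names (`FRWKVBlock`, `FrequencyEncoder`), dimensions, and the residual fusion are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LinearAttention(nn.Module):
    """Kernelized linear attention: O(L) in sequence length (non-causal form)."""

    def __init__(self, d_model: int):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Positive feature map phi(t) = elu(t) + 1 keeps attention weights non-negative.
        q, k = F.elu(self.q(x)) + 1, F.elu(self.k(x)) + 1
        v = self.v(x)
        # Associativity: compute K^T V first, so cost is O(L * d^2), not O(L^2 * d).
        kv = torch.einsum("bld,ble->bde", k, v)
        z = 1.0 / (torch.einsum("bld,bd->bl", q, k.sum(dim=1)) + 1e-6)
        return torch.einsum("bld,bde,bl->ble", q, kv, z)


class FrequencyEncoder(nn.Module):
    """FFT along time, learnable complex filter per frequency bin, inverse FFT back."""

    def __init__(self, d_model: int, seq_len: int):
        super().__init__()
        n_freq = seq_len // 2 + 1
        # Learnable spectral filter (assumed design; the paper's encoder may differ).
        self.filter = nn.Parameter(
            torch.randn(n_freq, d_model, dtype=torch.cfloat) * 0.02
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        spec = torch.fft.rfft(x, dim=1)    # (B, L//2+1, D) complex spectrum
        spec = spec * self.filter          # re-weight informative frequencies
        return torch.fft.irfft(spec, n=x.size(1), dim=1)


class FRWKVBlock(nn.Module):
    """Illustrative block: residual fusion of the two branches plus LayerNorm."""

    def __init__(self, d_model: int, seq_len: int):
        super().__init__()
        self.attn = LinearAttention(d_model)
        self.freq = FrequencyEncoder(d_model, seq_len)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.norm(x + self.attn(x) + self.freq(x))


x = torch.randn(8, 96, 64)   # (batch, lookback window, embedding dim)
print(FRWKVBlock(d_model=64, seq_len=96)(x).shape)  # torch.Size([8, 96, 64])
```

The scaling property claimed in the abstract is visible in `LinearAttention`: computing `K^T V` before applying the queries replaces the L×L attention matrix with a d×d state, which is what makes the attention path linear in sequence length.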