Fourier Transformer: Fast Long Range Modeling by Removing Sequence Redundancy with FFT Operator
arXiv: 2305.15099 · 24 May 2023
Ziwei He, Meng-Da Yang, Minwei Feng, Jingcheng Yin, Xiang Wang, Jingwen Leng, Zhouhan Lin
Tags: ViT

Papers citing "Fourier Transformer: Fast Long Range Modeling by Removing Sequence Redundancy with FFT Operator" (12 of 12 papers shown)

FreqKV: Frequency Domain Key-Value Compression for Efficient Context Window Extension
Jushi Kai, Boyi Zeng, Yixuan Wang, Haoli Bai, Bo Jiang, Zhouhan Lin
01 May 2025 (44 / 0 / 0)

Event USKT: U-State Space Model in Knowledge Transfer for Event Cameras
Yuhui Lin, Jiahao Zhang, Siyuan Li, Jimin Xiao, Ding Xu, Wenjun Wu, Jiaxuan Lu
22 Nov 2024 (74 / 0 / 0)

SUBLLM: A Novel Efficient Architecture with Token Sequence Subsampling for LLM
Quandong Wang, Yuxuan Yuan, Xiaoyu Yang, Ruike Zhang, Kang Zhao, Wei Liu, Jian Luan, Daniel Povey, Bin Wang
03 Jun 2024 (53 / 0 / 0)

Fourier Controller Networks for Real-Time Decision-Making in Embodied Learning
Hengkai Tan, Songming Liu, Kai Ma, Chengyang Ying, Xingxing Zhang, Hang Su, Jun Zhu
30 May 2024 (42 / 2 / 0)

Accelerating Transformers with Spectrum-Preserving Token Merging
Hoai-Chau Tran, D. M. Nguyen, Duy M. Nguyen, Trung Thanh Nguyen, Ngan Le, Pengtao Xie, Daniel Sonntag, James Y. Zou, Binh T. Nguyen, Mathias Niepert
25 May 2024 (46 / 8 / 0)

EmMixformer: Mix transformer for eye movement recognition
Huafeng Qin, Hongyu Zhu, Xin Jin, Qun Song, M. El-Yacoubi, Xinbo Gao
10 Jan 2024 (41 / 7 / 0)

SAM-PARSER: Fine-tuning SAM Efficiently by Parameter Space Reconstruction
Zelin Peng, Zhengqin Xu, Zhilin Zeng, Xiaokang Yang, Wei-Ming Shen
28 Aug 2023 (40 / 20 / 0)

An Efficient Memory-Augmented Transformer for Knowledge-Intensive NLP Tasks
Yuxiang Wu, Yu Zhao, Baotian Hu, Pasquale Minervini, Pontus Stenetorp, Sebastian Riedel
Tags: RALM, KELM
30 Oct 2022 (51 / 43 / 0)

The Pile: An 800GB Dataset of Diverse Text for Language Modeling
Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, ..., Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, Connor Leahy
Tags: AIMat
31 Dec 2020 (282 / 2,007 / 0)

Big Bird: Transformers for Longer Sequences
Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, ..., Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed
Tags: VLM
28 Jul 2020 (288 / 2,023 / 0)

Efficient Content-Based Sparse Attention with Routing Transformers
Aurko Roy, M. Saffar, Ashish Vaswani, David Grangier
Tags: MoE
12 Mar 2020 (255 / 580 / 0)

Teaching Machines to Read and Comprehend
Karl Moritz Hermann, Tomás Kociský, Edward Grefenstette, L. Espeholt, W. Kay, Mustafa Suleyman, Phil Blunsom
10 Jun 2015 (211 / 3,515 / 0)