FNet: Mixing Tokens with Fourier Transforms

James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon
9 May 2021 (arXiv:2105.03824)
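For context, the FNet paper replaces the self-attention sublayer of a Transformer encoder with an unparameterized mixing step: a 2D discrete Fourier transform over the sequence and hidden dimensions, of which only the real part is kept. The following is a minimal NumPy sketch of that mixing step alone (the fourier_mix helper is illustrative, not code from the paper); the residual connections, layer norms, and feed-forward sublayers of the full FNet block are omitted.

import numpy as np

def fourier_mix(x):
    # FNet-style token mixing: DFT along the hidden dimension, then along
    # the sequence dimension, keeping only the real part. This step has
    # no learned parameters.
    return np.real(np.fft.fft(np.fft.fft(x, axis=-1), axis=-2))

# Toy check: an 8-token sequence with hidden size 4 keeps its shape.
tokens = np.random.randn(8, 4)
print(fourier_mix(tokens).shape)  # prints (8, 4)

Because the mixing itself has nothing to train, the block is cheap relative to quadratic self-attention, which is the efficiency theme many of the citing papers below build on.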

Papers citing "FNet: Mixing Tokens with Fourier Transforms"

35 of 85 citing papers shown.

Digital Asset Valuation: A Study on Domain Names, Email Addresses, and NFTs (06 Oct 2022)
Kai Sun

WavSpA: Wavelet Space Attention for Boosting Transformers' Long Sequence Learning Ability (05 Oct 2022)
Yufan Zhuang, Zihan Wang, Fangbo Tao, Jingbo Shang
Topics: ViT, AI4TS

Liquid Structural State-Space Models (26 Sep 2022)
Ramin Hasani, Mathias Lechner, Tsun-Hsuan Wang, Makram Chahine, Alexander Amini, Daniela Rus
Topics: AI4TS

Adaptable Butterfly Accelerator for Attention-based NNs via Hardware and Algorithm Co-design (20 Sep 2022)
Hongxiang Fan, Thomas C. P. Chau, Stylianos I. Venieris, Royson Lee, Alexandros Kouris, Wayne Luk, Nicholas D. Lane, Mohamed S. Abdelfattah

Public Wisdom Matters! Discourse-Aware Hyperbolic Fourier Co-Attention for Social-Text Classification (15 Sep 2022)
Karish Grover, S. Angara, Md. Shad Akhtar, Tanmoy Chakraborty

Efficient Methods for Natural Language Processing: A Survey (31 Aug 2022)
Marcos Vinícius Treviso, Ji-Ung Lee, Tianchu Ji, Betty van Aken, Qingqing Cao, ..., Emma Strubell, Niranjan Balasubramanian, Leon Derczynski, Iryna Gurevych, Roy Schwartz

MRL: Learning to Mix with Attention and Convolutions (30 Aug 2022)
Shlok Mohta, Hisahiro Suganuma, Yoshiki Tanaka

CoBEVT: Cooperative Bird's Eye View Semantic Segmentation with Sparse Transformers (05 Jul 2022)
Runsheng Xu, Zhengzhong Tu, Hao Xiang, Wei Shao, Bolei Zhou, Jiaqi Ma

WaveMix: A Resource-efficient Neural Network for Image Analysis (28 May 2022)
Pranav Jeevan, Kavitha Viswanathan, Anandu A S, A. Sethi

MAD: Self-Supervised Masked Anomaly Detection Task for Multivariate Time Series (04 May 2022)
Yiwei Fu, Feng Xue
Topics: AI4TS

To Know by the Company Words Keep and What Else Lies in the Vicinity (30 Apr 2022)
Jake Williams, H. Heidenreich

Give Me Your Attention: Dot-Product Attention Considered Harmful for Adversarial Patch Robustness (25 Mar 2022)
Giulio Lovisotto, Nicole Finnie, Mauricio Muñoz, Chaithanya Kumar Mummadi, J. H. Metzen
Topics: AAML, ViT

Efficient Language Modeling with Sparse all-MLP (14 Mar 2022)
Ping Yu, Mikel Artetxe, Myle Ott, Sam Shleifer, Hongyu Gong, Ves Stoyanov, Xian Li
Topics: MoE

Contextformer: A Transformer with Spatio-Channel Attention for Context Modeling in Learned Image Compression (04 Mar 2022)
A. B. Koyuncu, Han Gao, Atanas Boev, Georgii Gaikov, Elena Alshina, Eckehard Steinbach
Topics: ViT

DCT-Former: Efficient Self-Attention with Discrete Cosine Transform (02 Mar 2022)
Carmelo Scribano, Giorgia Franchini, M. Prato, Marko Bertogna

Video Transformers: A Survey (16 Jan 2022)
Javier Selva, A. S. Johansen, Sergio Escalera, Kamal Nasrollahi, T. Moeslund, Albert Clapés
Topics: ViT

ConvMixer: Feature Interactive Convolution with Curriculum Learning for Small Footprint and Noisy Far-field Keyword Spotting (15 Jan 2022)
Dianwen Ng, Yunqi Chen, Biao Tian, Qiang Fu, Chng Eng Siong

A Novel Deep Parallel Time-series Relation Network for Fault Diagnosis (03 Dec 2021)
Chun Yang
Topics: AI4TS, AI4CE

Blending Anti-Aliasing into Vision Transformer (28 Oct 2021)
Shengju Qian, Hao Shao, Yi Zhu, Mu Li, Jiaya Jia

The Efficiency Misnomer (25 Oct 2021)
Mostafa Dehghani, Anurag Arnab, Lucas Beyer, Ashish Vaswani, Yi Tay

FDGATII: Fast Dynamic Graph Attention with Initial Residual and Identity Mapping (21 Oct 2021)
Gayan K. Kulatilleke, Marius Portmann, Ryan K. L. Ko, Shekhar S. Chandra

Inductive Biases and Variable Creation in Self-Attention Mechanisms (19 Oct 2021)
Benjamin L. Edelman, Surbhi Goel, Sham Kakade, Cyril Zhang

Use of Deterministic Transforms to Design Weight Matrices of a Neural Network (06 Oct 2021)
Pol Grau Jurado, Xinyue Liang, Alireza M. Javid, S. Chatterjee

Efficient and Private Federated Learning with Partially Trainable Networks (06 Oct 2021)
Hakim Sidahmed, Zheng Xu, Ankush Garg, Yuan Cao, Mingqing Chen
Topics: FedML

Oscillatory Fourier Neural Network: A Compact and Efficient Architecture for Sequential Processing (14 Sep 2021)
Bing Han, Cheng Wang, Kaushik Roy

Large-Scale Differentially Private BERT (03 Aug 2021)
Rohan Anil, Badih Ghazi, Vineet Gupta, Ravi Kumar, Pasin Manurangsi

Global Filter Networks for Image Classification (01 Jul 2021)
Yongming Rao, Wenliang Zhao, Zheng Zhu, Jiwen Lu, Jie Zhou
Topics: ViT

WAX-ML: A Python library for machine learning and feedback loops on streaming data (11 Jun 2021)
Emmanuel Sérié
Topics: KELM, AI4CE

Choose a Transformer: Fourier or Galerkin (31 May 2021)
Shuhao Cao

MLP-Mixer: An all-MLP Architecture for Vision (04 May 2021)
Ilya O. Tolstikhin, N. Houlsby, Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, ..., Andreas Steiner, Daniel Keysers, Jakob Uszkoreit, Mario Lucic, Alexey Dosovitskiy

Fourier Neural Operator for Parametric Partial Differential Equations (18 Oct 2020)
Zong-Yi Li, Nikola B. Kovachki, Kamyar Azizzadenesheli, Burigede Liu, K. Bhattacharya, Andrew M. Stuart, Anima Anandkumar
Topics: AI4CE

Efficient Transformers: A Survey (14 Sep 2020)
Yi Tay, Mostafa Dehghani, Dara Bahri, Donald Metzler
Topics: VLM

Big Bird: Transformers for Longer Sequences (28 Jul 2020)
Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, ..., Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed
Topics: VLM

Efficient Content-Based Sparse Attention with Routing Transformers (12 Mar 2020)
Aurko Roy, M. Saffar, Ashish Vaswani, David Grangier
Topics: MoE

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding (20 Apr 2018)
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
Topics: ELM