Anti-Oversmoothing in Deep Vision Transformers via the Fourier Domain Analysis: From Theory to Practice

9 March 2022
Peihao Wang, Wenqing Zheng, Tianlong Chen, Zhangyang Wang
ViT
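The paper's core claim, reflected in its title, is that self-attention behaves as a low-pass filter over tokens, so stacking many ViT blocks progressively erases high-frequency feature content ("oversmoothing"). As a minimal, hedged sketch of the usual Fourier-domain diagnostic (not the authors' released code), the following PyTorch snippet measures how much feature energy survives outside the DC (token-mean) component; the 197x768 shape and the 0.5 mixing weight are arbitrary demo assumptions:

    import torch

    # Illustrative helper (not from the paper): fraction of feature energy
    # outside the DC component. The DC component is the token mean; the
    # residual is the "high-frequency" part that oversmoothing suppresses.
    def high_freq_energy_ratio(tokens: torch.Tensor) -> float:
        # tokens: (num_tokens, dim) features from one transformer block
        dc = tokens.mean(dim=0, keepdim=True)
        residual = tokens - dc
        return (residual.pow(2).sum() / tokens.pow(2).sum()).item()

    # Synthetic demo: repeated token averaging, a crude stand-in for
    # self-attention's low-pass behavior, drives the ratio toward zero.
    feats = torch.randn(197, 768)  # e.g. ViT-B/16: 196 patches + [CLS]
    for depth in range(12):
        print(f"block {depth:2d}: high-frequency energy ratio = "
              f"{high_freq_energy_ratio(feats):.3f}")
        feats = 0.5 * feats + 0.5 * feats.mean(dim=0, keepdim=True)

In the limit of repeated smoothing only the DC component survives; this collapse is what the paper's proposed remedies are designed to prevent.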

Papers citing "Anti-Oversmoothing in Deep Vision Transformers via the Fourier Domain Analysis: From Theory to Practice"

24 / 74 papers shown
Diverse Cotraining Makes Strong Semi-Supervised Segmentor
Yijiang Li, Xinjiang Wang, Lihe Yang, Xue Jiang, Wayne Zhang, Ying Gao
18 Aug 2023

Revisiting Vision Transformer from the View of Path Ensemble
Shuning Chang, Pichao Wang, Haowen Luo, Fan Wang, Mike Zheng Shou
ViT
12 Aug 2023

Towards Building More Robust Models with Frequency Bias
Qingwen Bu, Dong Huang, Heming Cui
AAML
19 Jul 2023

3DSAM-adapter: Holistic Adaptation of SAM from 2D to 3D for Promptable Medical Image Segmentation
Shizhan Gong, Yuan Zhong, Wenao Ma, Jinpeng Li, Zhao Wang, Jingyang Zhang, Pheng-Ann Heng, Qi Dou
MedIm
23 Jun 2023

Multi-Architecture Multi-Expert Diffusion Models
Yunsung Lee, Jin-Young Kim, Hyojun Go, Myeongho Jeong, Shinhyeok Oh, Seungtaek Choi
DiffM
08 Jun 2023

Centered Self-Attention Layers
Ameen Ali, Tomer Galanti, Lior Wolf
02 Jun 2023

SmartTrim: Adaptive Tokens and Attention Pruning for Efficient Vision-Language Models
Zekun Wang, Jingchang Chen, Wangchunshu Zhou, Haichao Zhu, Jiafeng Liang, Liping Shan, Ming Liu, Dongliang Xu, Qing Yang, Bing Qin
VLM
24 May 2023

ScatterFormer: Locally-Invariant Scattering Transformer for Patient-Independent Multispectral Detection of Epileptiform Discharges
Rui-Hua Zheng, Jun Yu Li, Yi Wang, Tian Luo, Yuguo Yu
MedIm
26 Apr 2023

Token Contrast for Weakly-Supervised Semantic Segmentation
Lixiang Ru, Heliang Zheng, Yibing Zhan, Bo Du
ViT
02 Mar 2023

Specformer: Spectral Graph Neural Networks Meet Transformers
Deyu Bo, Chuan Shi, Lele Wang, Renjie Liao
02 Mar 2023

Are More Layers Beneficial to Graph Transformers?
Haiteng Zhao, Shuming Ma, Dongdong Zhang, Zhi-Hong Deng, Furu Wei
01 Mar 2023

EIT: Enhanced Interactive Transformer
Tong Zheng, Bei Li, Huiwen Bao, Tong Xiao, Jingbo Zhu
20 Dec 2022

MogaNet: Multi-order Gated Aggregation Network
Siyuan Li, Zedong Wang, Zicheng Liu, Cheng Tan, Haitao Lin, Di Wu, Zhiyuan Chen, Jiangbin Zheng, Stan Z. Li
07 Nov 2022

M$^3$ViT: Mixture-of-Experts Vision Transformer for Efficient Multi-task Learning with Model-Accelerator Co-design
Hanxue Liang, Zhiwen Fan, Rishov Sarkar, Ziyu Jiang, Tianlong Chen, Kai Zou, Yu Cheng, Cong Hao, Zhangyang Wang
MoE
26 Oct 2022

Old can be Gold: Better Gradient Flow can Make Vanilla-GCNs Great Again
Ajay Jaiswal, Peihao Wang, Tianlong Chen, Justin F. Rousseau, Ying Ding, Zhangyang Wang
14 Oct 2022

Can We Solve 3D Vision Tasks Starting from A 2D Vision Transformer?
Yi Wang, Zhiwen Fan, Tianlong Chen, Hehe Fan, Zhangyang Wang
ViT
15 Sep 2022

Learning Spatial-Frequency Transformer for Visual Object Tracking
Chuanming Tang, Tianlin Li, Yuanchao Bai, Zhe Wu, Jianlin Zhang, Yongmei Huang
ViT
18 Aug 2022

A Study on Transformer Configuration and Training Objective
Fuzhao Xue, Jianghai Chen, Aixin Sun, Xiaozhe Ren, Zangwei Zheng, Xiaoxin He, Yongming Chen, Xin Jiang, Yang You
21 May 2022

Improving Vision Transformers by Revisiting High-frequency Components
Jiawang Bai, Liuliang Yuan, Shutao Xia, Shuicheng Yan, Zhifeng Li, Wei Liu
ViT
03 Apr 2022

What Do Adversarially trained Neural Networks Focus: A Fourier Domain-based Study
Binxiao Huang, Chaofan Tao, R. Lin, Ngai Wong
AAML, OOD
16 Mar 2022

Symbolic Learning to Optimize: Towards Interpretability and Scalability
Wenqing Zheng, Tianlong Chen, Ting-Kuei Hu, Zhangyang Wang
13 Mar 2022

Bag of Tricks for Training Deeper Graph Neural Networks: A Comprehensive Benchmark Study
Tianlong Chen, Kaixiong Zhou, Keyu Duan, Wenqing Zheng, Peihao Wang, Xia Hu, Zhangyang Wang
AAML, GNN
24 Aug 2021

GeoT: A Geometry-aware Transformer for Reliable Molecular Property Prediction and Chemically Interpretable Representation Learning
Bumju Kwak, J. Park, Taewon Kang, Jeonghee Jo, Byunghan Lee, Sungroh Yoon
AI4CE
29 Jun 2021

Benefits of depth in neural networks
Matus Telgarsky
14 Feb 2016