Accumulated Trivial Attention Matters in Vision Transformers on Small Datasets


Papers citing "Accumulated Trivial Attention Matters in Vision Transformers on Small Datasets"

28 / 28 papers shown
Title: DaViT: Dual Attention Vision Transformers
Authors: Mingyu Ding, Bin Xiao, Noel Codella, Ping Luo, Jingdong Wang, Lu Yuan
Date: 07 Apr 2022
