Cited By

DS-ViT: Dual-Stream Vision Transformer for Cross-Task Distillation in Alzheimer's Early Diagnosis
Ke Chen, Yifeng Wang, Yufei Zhou, Haohan Wang
arXiv: 2409.07584 · 11 September 2024
Papers citing "DS-ViT: Dual-Stream Vision Transformer for Cross-Task Distillation in Alzheimer's Early Diagnosis" (5 / 5 papers shown)
1. Training data-efficient image transformers & distillation through attention
   Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou
   ViT · 195 · 6,657 · 0 · 23 Dec 2020

2. Knowledge Distillation from Internal Representations
   Gustavo Aguilar, Yuan Ling, Yu Zhang, Benjamin Yao, Xing Fan, Edward Guo
   43 · 179 · 0 · 08 Oct 2019

3. Learning Efficient Convolutional Networks through Network Slimming
   Zhuang Liu, Jianguo Li, Zhiqiang Shen, Gao Huang, Shoumeng Yan, Changshui Zhang
   87 · 2,407 · 0 · 22 Aug 2017

4. Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset
   João Carreira, Andrew Zisserman
   170 · 7,961 · 0 · 22 May 2017

5. Densely Connected Convolutional Networks
   Gao Huang, Zhuang Liu, Laurens van der Maaten, Kilian Q. Weinberger
   PINN · 3DV · 496 · 36,599 · 0 · 25 Aug 2016