Lightweight Model Pre-training via Language Guided Knowledge Distillation
arXiv: 2406.11689 · 17 June 2024
Mingsheng Li, Lin Zhang, Mingzhen Zhu, Zilong Huang, Gang Yu, Jiayuan Fan, Tao Chen
Papers citing "Lightweight Model Pre-training via Language Guided Knowledge Distillation" (5 of 5 papers shown)
PointCLIP: Point Cloud Understanding by CLIP
Renrui Zhang, Ziyu Guo, Wei Zhang, Kunchang Li, Xupeng Miao, Bin Cui, Yu Qiao, Peng Gao, Hongsheng Li
Tags: VLM, 3DPC
04 Dec 2021
Self-Supervised Learning by Estimating Twin Class Distributions
Feng Wang, Tao Kong, Rufeng Zhang, Huaping Liu, Hang Li
Tags: SSL
14 Oct 2021
ActionCLIP: A New Paradigm for Video Action Recognition
Mengmeng Wang, Jiazheng Xing, Yong Liu
Tags: VLM
17 Sep 2021
Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision
Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu H. Pham, Quoc V. Le, Yun-hsuan Sung, Zhen Li, Tom Duerig
Tags: VLM, CLIP
11 Feb 2021
SEED: Self-supervised Distillation For Visual Representation
Zhiyuan Fang, Jianfeng Wang, Lijuan Wang, Lei Zhang, Yezhou Yang, Zicheng Liu
Tags: SSL
12 Jan 2021