Zero-Shot Distillation for Image Encoders: How to Make Effective Use of Synthetic Data
arXiv:2404.16637 · 25 April 2024
Niclas Popp, J. H. Metzen, Matthias Hein
Tags: VLM

Papers citing "Zero-Shot Distillation for Image Encoders: How to Make Effective Use of Synthetic Data" (16 papers)

What Do You See? Enhancing Zero-Shot Image Classification with Multimodal Large Language Models (24 May 2024)
Abdelrahman Abdelhamed, Mahmoud Afifi, Alec Go
Tags: MLLM, VLM

Learning Vision from Models Rivals Learning Vision from Data (28 Dec 2023)
Yonglong Tian, Lijie Fan, Kaifeng Chen, Dina Katabi, Dilip Krishnan, Phillip Isola

Diversify, Don't Fine-Tune: Scaling Up Visual Recognition Training with Synthetic Images (04 Dec 2023)
Zhuoran Yu, Chenchen Zhu, Sean Culatana, Raghuraman Krishnamoorthi, Fanyi Xiao, Yong Jae Lee

CLIP-KD: An Empirical Study of CLIP Model Distillation (24 Jul 2023)
Chuanguang Yang, Zhulin An, Libo Huang, Junyu Bi, Xinqiang Yu, Hansheng Yang, Boyu Diao, Yongjun Xu
Tags: VLM

Synthetic Data from Diffusion Models Improves ImageNet Classification (17 Apr 2023)
Shekoofeh Azizi, Simon Kornblith, Chitwan Saharia, Mohammad Norouzi, David J. Fleet
Tags: VLM, DiffM

BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models (30 Jan 2023)
Junnan Li, Dongxu Li, Silvio Savarese, Steven C. H. Hoi
Tags: VLM, MLLM

Vision-Language Pre-training: Basics, Recent Advances, and Future Trends (17 Oct 2022)
Zhe Gan, Linjie Li, Chunyuan Li, Lijuan Wang, Zicheng Liu, Jianfeng Gao
Tags: VLM

Knowledge Distillation as Efficient Pre-training: Faster Convergence, Higher Data-efficiency, and Better Transferability (10 Mar 2022)
Ruifei He, Shuyang Sun, Jihan Yang, Song Bai, Xiaojuan Qi

LAION-400M: Open Dataset of CLIP-Filtered 400 Million Image-Text Pairs (03 Nov 2021)
Christoph Schuhmann, Richard Vencu, Romain Beaumont, R. Kaczmarczyk, Clayton Mullis, Aarush Katta, Theo Coombes, J. Jitsev, Aran Komatsuzaki
Tags: VLM, MLLM, CLIP

Does Knowledge Distillation Really Work? (10 Jun 2021)
Samuel Stanton, Pavel Izmailov, Polina Kirichenko, Alexander A. Alemi, A. Wilson
Tags: FedML

DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter (02 Oct 2019)
Victor Sanh, Lysandre Debut, Julien Chaumond, Thomas Wolf

Contrastive Multiview Coding (13 Jun 2019)
Yonglong Tian, Dilip Krishnan, Phillip Isola
Tags: SSL

Learning Robust Global Representations by Penalizing Local Predictive Power (29 May 2019)
Haohan Wang, Songwei Ge, Eric Xing, Zachary Chase Lipton
Tags: OOD

EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks (28 May 2019)
Mingxing Tan, Quoc V. Le
Tags: 3DV, MedIm

Zero-Shot Knowledge Distillation in Deep Networks (20 May 2019)
Gaurav Kumar Nayak, Konda Reddy Mopuri, Vaisakh Shaj, R. Venkatesh Babu, Anirban Chakraborty

Benchmarking Neural Network Robustness to Common Corruptions and Perturbations (28 Mar 2019)
Dan Hendrycks, Thomas G. Dietterich
Tags: OOD, VLM