Proto-CLIP: Vision-Language Prototypical Network for Few-Shot Learning
6 July 2023
Jishnu Jaykumar P, Kamalesh Palanisamy, Yu-Wei Chao, Xinya Du, Yu Xiang
Tags: VLM
Papers citing "Proto-CLIP: Vision-Language Prototypical Network for Few-Shot Learning" (8 of 8 papers shown)
Can Masked Autoencoders Also Listen to Birds?
Lukas Rauch, Ilyass Moummad, René Heinrich, Alexis Joly, Bernhard Sick, Christoph Scholz
33 · 0 · 0 · 17 Apr 2025

Logits DeConfusion with CLIP for Few-Shot Learning
Shuo Li, F. Liu, Zehua Hao, Xingyu Wang, Lingling Li, Xianglong Liu, Puhua Chen, Wenping Ma
Tags: VLM
54 · 0 · 0 · 16 Apr 2025

Adapting Pre-Trained Vision Models for Novel Instance Detection and Segmentation
Ya Lu, Jishnu Jaykumar, Yunhui Guo, Nicholas Ruozzi, Yu Xiang
Tags: VLM, ISeg
62 · 4 · 0 · 28 May 2024

A Survey on Multimodal Large Language Models for Autonomous Driving
Can Cui, Yunsheng Ma, Xu Cao, Wenqian Ye, Yang Zhou, ..., Xinrui Yan, Shuqi Mei, Jianguo Cao, Ziran Wang, Chao Zheng
48 · 258 · 0 · 21 Nov 2023

LLM4Drive: A Survey of Large Language Models for Autonomous Driving
Zhenjie Yang, Xiaosong Jia, Hongyang Li, Junchi Yan
Tags: ELM
47 · 97 · 0 · 02 Nov 2023

Learning to Prompt for Vision-Language Models
Kaiyang Zhou, Jingkang Yang, Chen Change Loy, Ziwei Liu
Tags: VPVLM, CLIP, VLM
350 · 2,286 · 0 · 02 Sep 2021

CrossTransformers: spatially-aware few-shot transfer
Carl Doersch, Ankush Gupta, Andrew Zisserman
Tags: ViT
215 · 330 · 0 · 22 Jul 2020

Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
Chelsea Finn, Pieter Abbeel, Sergey Levine
Tags: OOD
481 · 11,715 · 0 · 09 Mar 2017