Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm (arXiv:2110.05208)
11 October 2021
Yangguang Li, Feng Liang, Lichen Zhao, Yufeng Cui, Wanli Ouyang, Jing Shao, F. Yu, Junjie Yan
Tags: VLM, CLIP

Papers citing "Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm" (50 of 324 papers shown)

1. Unified Lexical Representation for Interpretable Visual-Language Alignment
   Yifan Li, Yikai Wang, Yanwei Fu, Dongyu Ru, Zheng-Wei Zhang, Tong He (25 Jul 2024) [VLM]
2. Assessing Brittleness of Image-Text Retrieval Benchmarks from Vision-Language Models Perspective
   Mariya Hendriksen, Shuo Zhang, R. Reinanda, Mohamed Yahya, Edgar Meij, Maarten de Rijke (21 Jul 2024)
3. Aligning Sight and Sound: Advanced Sound Source Localization Through Audio-Visual Alignment
   Arda Senocak, H. Ryu, Junsik Kim, Tae-Hyun Oh, Hanspeter Pfister, Joon Son Chung (18 Jul 2024)
4. Hypergraph Multi-modal Large Language Model: Exploiting EEG and Eye-tracking Modalities to Evaluate Heterogeneous Responses for Video Understanding
   Minghui Wu, Chenxu Zhao, Anyang Su, Donglin Di, Tianyu Fu, ..., Min He, Ya Gao, Meng Ma, Kun Yan, Ping Wang (11 Jul 2024)
5. Semantic Compositions Enhance Vision-Language Contrastive Learning
   Maxwell Mbabilla Aladago, Lorenzo Torresani, Soroush Vosoughi (01 Jul 2024) [CoGe, VLM, CLIP]
6. ET tu, CLIP? Addressing Common Object Errors for Unseen Environments
   Ye Won Byun, Cathy Jiao, Shahriar Noroozizadeh, Jimin Sun, Rosa Vitiello (25 Jun 2024) [VLM]
7. Mitigate the Gap: Investigating Approaches for Improving Cross-Modal Alignment in CLIP
   Sedigheh Eslami, Gerard de Melo (25 Jun 2024) [VLM]
8. Lightweight Model Pre-training via Language Guided Knowledge Distillation
   Mingsheng Li, Lin Zhang, Mingzhen Zhu, Zilong Huang, Gang Yu, Jiayuan Fan, Tao Chen (17 Jun 2024)
9. Industrial Language-Image Dataset (ILID): Adapting Vision Foundation Models for Industrial Settings
   Keno Moenck, Duc Trung Thieu, Julian Koch, Thorsten Schuppstuhl (14 Jun 2024) [VLM]
10. Exploring the Spectrum of Visio-Linguistic Compositionality and Recognition
    Youngtaek Oh, Pyunghwan Ahn, Jinhyung Kim, Gwangmo Song, Soonyoung Lee, In So Kweon, Junmo Kim (13 Jun 2024) [CoGe]
11. ConMe: Rethinking Evaluation of Compositional Reasoning for Modern VLMs
    Irene Huang, Wei Lin, M. Jehanzeb Mirza, Jacob A. Hansen, Sivan Doveh, ..., Trevor Darrel, Chuang Gan, Aude Oliva, Rogerio Feris, Leonid Karlinsky (12 Jun 2024) [CoGe, LRM]
12. Benchmarking Vision-Language Contrastive Methods for Medical Representation Learning
    Shuvendu Roy, Yasaman Parhizkar, Franklin Ogidi, Vahid Reza Khazaie, Michael Colacci, Ali Etemad, Elham Dolatabadi, Arash Afkanpour (11 Jun 2024) [VLM]
13. RWKV-CLIP: A Robust Vision-Language Representation Learner
    Tiancheng Gu, Kaicheng Yang, Xiang An, Ziyong Feng, Dongnan Liu, Weidong Cai, Jiankang Deng (11 Jun 2024) [VLM, CLIP]
14. Gentle-CLIP: Exploring Aligned Semantic In Low-Quality Multimodal Data With Soft Alignment
    Zijia Song, Z. Zang, Yelin Wang, Guozheng Yang, Jiangbin Zheng, Kaicheng Yu, Wanyu Chen, Stan Z. Li (09 Jun 2024)
15. ALGO: Object-Grounded Visual Commonsense Reasoning for Open-World Egocentric Action Recognition
    Sanjoy Kundu, Shubham Trehan, Sathyanarayanan N. Aakur (09 Jun 2024) [LM&Ro, LRM]
16. MLIP: Efficient Multi-Perspective Language-Image Pretraining with Exhaustive Data Utilization
    Yu Zhang, Qi Zhang, Zixuan Gong, Yiwei Shi, Yepeng Liu, ..., Ke Liu, Kun Yi, Wei Fan, Liang Hu, Changwei Wang (03 Jun 2024) [CLIP, VLM]
17. ED-SAM: An Efficient Diffusion Sampling Approach to Domain Generalization in Vision-Language Foundation Models
    Thanh-Dat Truong, Xin Li, Bhiksha Raj, Jackson Cothren, Khoa Luu (03 Jun 2024) [DiffM, VLM]
18. Can't make an Omelette without Breaking some Eggs: Plausible Action Anticipation using Large Video-Language Models
    Himangi Mittal, Nakul Agarwal, Shao-Yuan Lo, Kwonjoon Lee (30 May 2024)
19. Diagnosing the Compositional Knowledge of Vision Language Models from a Game-Theoretic View
    Jin Wang, Shichao Dong, Yapeng Zhu, Kelu Yao, Weidong Zhao, Chao Li, Ping Luo (27 May 2024) [CoGe, LRM]
20. Think Before You Act: A Two-Stage Framework for Mitigating Gender Bias Towards Vision-Language Tasks
    Yunqi Zhang, Songda Li, Chunyuan Deng, Luyi Wang, Hui Zhao (27 May 2024)
21. CLIBD: Bridging Vision and Genomics for Biodiversity Monitoring at Scale
    ZeMing Gong, Austin T. Wang, Joakim Bruslund Haurum, Scott C. Lowe, Graham W. Taylor, Angel X. Chang (27 May 2024)
22. CRoFT: Robust Fine-Tuning with Concurrent Optimization for OOD Generalization and Open-Set OOD Detection
    Lin Zhu, Yifeng Yang, Qinying Gu, Xinbing Wang, Cheng Zhou, Nanyang Ye (26 May 2024) [VLM]
23. Distilling Vision-Language Pretraining for Efficient Cross-Modal Retrieval
    Young Kyun Jang, Donghyun Kim, Ser-nam Lim (23 May 2024) [VLM]
24. Towards Cross-modal Backward-compatible Representation Learning for Vision-Language Models
    Young Kyun Jang, Ser-nam Lim (23 May 2024) [VLM]
25. Mammo-CLIP: A Vision Language Foundation Model to Enhance Data Efficiency and Robustness in Mammography
    Shantanu Ghosh, Clare B. Poynton, Shyam Visweswaran, Kayhan Batmanghelich (20 May 2024) [VLM]
26. Enhancing Fine-Grained Image Classifications via Cascaded Vision Language Models
    Canshi Wei (18 May 2024) [VLM]
27. FFF: Fixing Flawed Foundations in contrastive pre-training results in very strong Vision-Language models
    Adrian Bulat, Yassine Ouali, Georgios Tzimiropoulos (16 May 2024) [VLM]
28. When LLMs step into the 3D World: A Survey and Meta-Analysis of 3D Tasks via Multi-modal Large Language Models
    Xianzheng Ma, Yash Bhalgat, Brandon Smart, Shuai Chen, Xinghui Li, ..., Matthias Nießner, Ian D Reid, Angel X. Chang, Iro Laina, V. Prisacariu (16 May 2024) [LRM]
29. Self-supervised vision-language alignment of deep learning representations for bone X-rays analysis
    A. Englebert, Anne-Sophie Collin, O. Cornu, Christophe De Vleeschouwer (14 May 2024)
30. Enhanced Multimodal Content Moderation of Children's Videos using Audiovisual Fusion
    Syed Hammad Ahmed, M. Khan, G. Sukthankar (09 May 2024)
31. LangCell: Language-Cell Pre-training for Cell Identity Understanding
    Suyuan Zhao, Jiahuan Zhang, Yushuai Wu, Yizhen Luo, Zaiqing Nie (09 May 2024) [VLM]
32. Understanding Retrieval-Augmented Task Adaptation for Vision-Language Models
    Yifei Ming, Yixuan Li (02 May 2024) [VLM]
33. Enhancing Interactive Image Retrieval With Query Rewriting Using Large Language Models and Vision Language Models
    Hongyi Zhu, Jia-Hong Huang, S. Rudinac, Evangelos Kanoulas (29 Apr 2024)
34. MediFact at MEDIQA-M3G 2024: Medical Question Answering in Dermatology with Multimodal Learning
    Nadia Saeed (27 Apr 2024) [MedIm]
35. Training-Free Unsupervised Prompt for Vision-Language Models
    Sifan Long, Linbin Wang, Zhen Zhao, Zichang Tan, Yiming Wu, Shengsheng Wang, Jingdong Wang (25 Apr 2024) [VLM, VPVLM]
36. Conditional Prototype Rectification Prompt Learning
    Haoxing Chen, Yaohui Li, Zizheng Huang, Yan Hong, Zhuoer Xu, Zhangxuan Gu, Jun Lan, Huijia Zhu, Weiqiang Wang (15 Apr 2024) [VLM]
37. RankCLIP: Ranking-Consistent Language-Image Pretraining
    Yiming Zhang, Zhuokai Zhao, Zhaorun Chen, Zhili Feng, Zenghui Ding, Yining Sun (15 Apr 2024) [SSL, VLM]
38. Transductive Zero-Shot and Few-Shot CLIP
    Ségolène Martin, Yunshi Huang, Fereshteh Shakeri, J. Pesquet, Ismail Ben Ayed (08 Apr 2024) [BDL, VLM]
39. Would Deep Generative Models Amplify Bias in Future Models?
    Tianwei Chen, Yusuke Hirota, Mayu Otani, Noa Garcia, Yuta Nakashima (04 Apr 2024)
40. Training-Free Semantic Segmentation via LLM-Supervision
    Wenfang Sun, Yingjun Du, Gaowen Liu, Ramana Rao Kompella, Cees G. M. Snoek (31 Mar 2024) [VLM]
41. Heterogeneous Contrastive Learning for Foundation Models and Beyond
    Lecheng Zheng, Baoyu Jing, Zihao Li, Hanghang Tong, Jingrui He (30 Mar 2024) [VLM]
42. Toward Interactive Regional Understanding in Vision-Large Language Models
    Jungbeom Lee, Sanghyuk Chun, Sangdoo Yun (27 Mar 2024) [VLM]
43. DreamLIP: Language-Image Pre-training with Long Captions
    Kecheng Zheng, Yifei Zhang, Wei Wu, Fan Lu, Shuailei Ma, Xin Jin, Wei Chen, Yujun Shen (25 Mar 2024) [VLM, CLIP]
44. UrbanVLP: Multi-Granularity Vision-Language Pretraining for Urban Socioeconomic Indicator Prediction
    Xixuan Hao, Wei Chen, Yibo Yan, Siru Zhong, Kun Wang, Qingsong Wen, Keli Zhang (25 Mar 2024) [VLM]
45. Enhancing Video Transformers for Action Understanding with VLM-aided Training
    Hui Lu, Hu Jian, Ronald Poppe, A. A. Salah (24 Mar 2024)
46. Long-CLIP: Unlocking the Long-Text Capability of CLIP
    Beichen Zhang, Pan Zhang, Xiao-wen Dong, Yuhang Zang, Jiaqi Wang (22 Mar 2024) [CLIP, VLM]
47. Parameter-Efficient Fine-Tuning for Large Models: A Comprehensive Survey
    Zeyu Han, Chao Gao, Jinyang Liu, Jeff Zhang, Sai Qian Zhang (21 Mar 2024)
48. Towards Multimodal In-Context Learning for Vision & Language Models
    Sivan Doveh, Shaked Perek, M. Jehanzeb Mirza, Wei Lin, Amit Alfassy, Assaf Arbelle, S. Ullman, Leonid Karlinsky (19 Mar 2024) [VLM]
49. Data-Efficient Contrastive Language-Image Pretraining: Prioritizing Data Quality over Quantity
    Siddharth Joshi, Arnav Jain, Ali Payani, Baharan Mirzasoleiman (18 Mar 2024) [VLM, CLIP]
50. PosSAM: Panoptic Open-vocabulary Segment Anything
    VS Vibashan, Shubhankar Borse, Hyojin Park, Debasmit Das, Vishal M. Patel, Munawar Hayat, Fatih Porikli (14 Mar 2024) [VLM, MLLM]