Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm

11 October 2021
Yangguang Li
Feng Liang
Lichen Zhao
Yufeng Cui
Wanli Ouyang
Jing Shao
F. Yu
Junjie Yan
    VLM
    CLIP

Papers citing "Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm"

50 / 324 papers shown
MoPE-CLIP: Structured Pruning for Efficient Vision-Language Models with Module-wise Pruning Error Metric
Haokun Lin
Haoli Bai
Zhili Liu
Lu Hou
Muyi Sun
Linqi Song
Ying Wei
Zhenan Sun
CLIP
VLM
63
14
0
12 Mar 2024
Transformers and Language Models in Form Understanding: A Comprehensive Review of Scanned Document Analysis
Abdelrahman Abdallah
Daniel Eberharter
Zoe Pfister
Adam Jatowt
40
12
0
06 Mar 2024
Rethinking The Uniformity Metric in Self-Supervised Learning
Xianghong Fang
Jian Li
Qiang Sun
Benyou Wang
SSL
12
0
0
01 Mar 2024
Demonstrating and Reducing Shortcuts in Vision-Language Representation Learning
Maurits J. R. Bleeker
Mariya Hendriksen
Andrew Yates
Maarten de Rijke
VLM
40
3
0
27 Feb 2024
Analysis of Using Sigmoid Loss for Contrastive Learning
Chungpa Lee
Joonhwan Chang
Jy-yong Sohn
48
2
0
20 Feb 2024
Assessing News Thumbnail Representativeness: Counterfactual text can enhance the cross-modal matching ability
Yejun Yoon
Seunghyun Yoon
Kunwoo Park
23
0
0
17 Feb 2024
Analyzing the Roles of Language and Vision in Learning from Limited Data
Allison Chen
Ilia Sucholutsky
Olga Russakovsky
Thomas L. Griffiths
VLM
29
2
0
15 Feb 2024
Overcoming the Pitfalls of Vision-Language Model Finetuning for OOD Generalization
Yuhang Zang
Hanlin Goh
Josh Susskind
Chen Huang
VLM
39
12
0
29 Jan 2024
Towards 3D Molecule-Text Interpretation in Language Models
Sihang Li
Zhiyuan Liu
Yancheng Luo
Xiang Wang
Xiangnan He
Kenji Kawaguchi
Tat-Seng Chua
Qi Tian
AI4CE
35
42
0
25 Jan 2024
Exploring scalable medical image encoders beyond text supervision
Fernando Pérez-García
Harshita Sharma
Sam Bond-Taylor
Kenza Bouzid
Valentina Salvatelli
...
Maria T. A. Wetscherek
Noel C. F. Codella
Stephanie L. Hyland
Javier Alvarez-Valle
Ozan Oktay
LM&MA
MedIm
50
26
0
19 Jan 2024
Question-Answer Cross Language Image Matching for Weakly Supervised Semantic Segmentation
Songhe Deng
Wei Zhuo
Jinheng Xie
Linlin Shen
VLM
15
6
0
18 Jan 2024
FiGCLIP: Fine-Grained CLIP Adaptation via Densely Annotated Videos
S. Darshan Singh
Zeeshan Khan
Makarand Tapaswi
VLM
CLIP
36
3
0
15 Jan 2024
Few-shot Adaptation of Multi-modal Foundation Models: A Survey
Fan Liu
Tianshu Zhang
Wenwen Dai
Wenwen Cai
Xiaocong Zhou
Delong Chen
VLM
OffRL
31
23
0
03 Jan 2024
3VL: Using Trees to Improve Vision-Language Models' Interpretability
Nir Yellinek
Leonid Karlinsky
Raja Giryes
CoGe
VLM
49
4
0
28 Dec 2023
Black-Box Tuning of Vision-Language Models with Effective Gradient Approximation
Zixian Guo
Yuxiang Wei
Ming-Yu Liu
Zhilong Ji
Jinfeng Bai
Yiwen Guo
Wangmeng Zuo
VLM
36
8
0
26 Dec 2023
Misalign, Contrast then Distill: Rethinking Misalignments in Language-Image Pretraining
Bumsoo Kim
Yeonsik Jo
Jinhyung Kim
S. Kim
VLM
27
7
0
19 Dec 2023
Expediting Contrastive Language-Image Pretraining via Self-distilled Encoders
Bumsoo Kim
Jinhyung Kim
Yeonsik Jo
S. Kim
VLM
26
3
0
19 Dec 2023
Domain Prompt Learning with Quaternion Networks
Qinglong Cao
Zhengqin Xu
Yuntian Chen
Chao Ma
Xiaokang Yang
VLM
42
10
0
12 Dec 2023
Medical Vision Language Pretraining: A survey
Prashant Shrestha
Sanskar Amgain
Bidur Khanal
Cristian A. Linte
Binod Bhattarai
VLM
34
14
0
11 Dec 2023
Improved Visual Grounding through Self-Consistent Explanations
Ruozhen He
Paola Cascante-Bonilla
Ziyan Yang
Alexander C. Berg
Vicente Ordonez
ReLM
ObjD
LRM
FAtt
35
8
0
07 Dec 2023
LightCLIP: Learning Multi-Level Interaction for Lightweight Vision-Language Models
Ying Nie
Wei He
Kai Han
Yehui Tang
Tianyu Guo
Fanyi Du
Yunhe Wang
VLM
19
3
0
01 Dec 2023
TeG-DG: Textually Guided Domain Generalization for Face Anti-Spoofing
Lianrui Mu
Jianhong Bai
Xiaoxuan He
Jiangnan Ye
Xiaoyu Liang
Yuchen Yang
Jiedong Zhuang
Haoji Hu
27
2
0
30 Nov 2023
MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training
Pavan Kumar Anasosalu Vasu
Hadi Pouransari
Fartash Faghri
Raviteja Vemulapalli
Oncel Tuzel
CLIP
VLM
33
43
0
28 Nov 2023
Beyond Sole Strength: Customized Ensembles for Generalized Vision-Language Models
Zhihe Lu
Jiawang Bai
Xin Li
Zeyu Xiao
Xinchao Wang
VLM
49
11
0
28 Nov 2023
CLAP: Isolating Content from Style through Contrastive Learning with Augmented Prompts
Yichao Cai
Yuhang Liu
Zhen Zhang
Javen Qinfeng Shi
CLIP
VLM
34
5
0
28 Nov 2023
BadCLIP: Trigger-Aware Prompt Learning for Backdoor Attacks on CLIP
Jiawang Bai
Kuofeng Gao
Shaobo Min
Shu-Tao Xia
Zhifeng Li
Wei Liu
VLM
29
37
0
26 Nov 2023
Effective Backdoor Mitigation in Vision-Language Models Depends on the Pre-training Objective
Sahil Verma
Gantavya Bhatt
Avi Schwarzschild
Soumye Singhal
Arnav M. Das
Chirag Shah
John P Dickerson
Jeff Bilmes
AAML
59
1
0
25 Nov 2023
BadCLIP: Dual-Embedding Guided Backdoor Attack on Multimodal Contrastive Learning
Siyuan Liang
Mingli Zhu
Aishan Liu
Baoyuan Wu
Xiaochun Cao
Ee-Chien Chang
32
50
0
20 Nov 2023
Meta-Adapter: An Online Few-shot Learner for Vision-Language Model
Cheng Cheng
Lin Song
Ruoyi Xue
Hang Wang
Hongbin Sun
Yixiao Ge
Ying Shan
VLM
ObjD
39
19
0
07 Nov 2023
On the Powerfulness of Textual Outlier Exposure for Visual OoD Detection
Sangha Park
J. Mok
Dahuin Jung
Saehyung Lee
Sung-Hoon Yoon
24
10
0
25 Oct 2023
CAPIVARA: Cost-Efficient Approach for Improving Multilingual CLIP Performance on Low-Resource Languages
G. O. D. Santos
Diego A. B. Moreira
Alef Iury Ferreira
Jhessica Silva
Luiz Pereira
...
H. Maia
Nádia Da Silva
Esther Colombini
Hélio Pedrini
Sandra Avila
VLM
CLIP
34
4
0
20 Oct 2023
SILC: Improving Vision Language Pretraining with Self-Distillation
Muhammad Ferjad Naeem
Yongqin Xian
Xiaohua Zhai
Lukas Hoyer
Luc Van Gool
F. Tombari
VLM
28
33
0
20 Oct 2023
CXR-CLIP: Toward Large Scale Chest X-ray Language-Image Pre-training
Kihyun You
Jawook Gu
Jiyeon Ham
Beomhee Park
Jiho Kim
Eun K. Hong
Woonhyuk Baek
Byungseok Roh
CLIP
VLM
26
59
0
20 Oct 2023
MolCA: Molecular Graph-Language Modeling with Cross-Modal Projector and Uni-Modal Adapter
Zhiyuan Liu
Sihang Li
Yancheng Luo
Hao Fei
Yixin Cao
Kenji Kawaguchi
Xiang Wang
Tat-Seng Chua
30
81
0
19 Oct 2023
Large Models for Time Series and Spatio-Temporal Data: A Survey and Outlook
Ming Jin
Qingsong Wen
Keli Zhang
Chaoli Zhang
Siqiao Xue
...
Shirui Pan
Vincent S. Tseng
Yu Zheng
Lei Chen
Hui Xiong
AI4TS
SyDa
35
118
0
16 Oct 2023
Leveraging Vision-Language Models for Improving Domain Generalization in Image Classification
Sravanti Addepalli
Ashish Ramayee Asokan
Lakshay Sharma
R. V. Babu
VLM
24
15
0
12 Oct 2023
Improving Compositional Text-to-image Generation with Large Vision-Language Models
Song Wen
Guian Fang
Renrui Zhang
Peng Gao
Hao Dong
Dimitris N. Metaxas
25
17
0
10 Oct 2023
Analyzing Zero-Shot Abilities of Vision-Language Models on Video Understanding Tasks
Avinash Madasu
Anahita Bhiwandiwalla
Vasudev Lal
VLM
37
0
0
07 Oct 2023
Better Safe than Sorry: Pre-training CLIP against Targeted Data Poisoning and Backdoor Attacks
Wenhan Yang
Jingdong Gao
Baharan Mirzasoleiman
VLM
26
6
0
05 Oct 2023
Understanding Transferable Representation Learning and Zero-shot Transfer in CLIP
Zixiang Chen
Yihe Deng
Yuanzhi Li
Quanquan Gu
VLM
28
11
0
02 Oct 2023
Text-image Alignment for Diffusion-based Perception
Neehar Kondapaneni
Markus Marks
Manuel Knott
Rogério Guimarães
Pietro Perona
VLM
DiffM
24
32
0
29 Sep 2023
The Devil is in the Details: A Deep Dive into the Rabbit Hole of Data Filtering
Hai-ping Yu
Yu Tian
Sateesh Kumar
Linjie Yang
Heng Wang
VLM
32
17
0
27 Sep 2023
Object-Centric Open-Vocabulary Image-Retrieval with Aggregated Features
Hila Levi
Guy Heller
Dan Levi
Ethan Fetaya
OCL
VLM
27
3
0
26 Sep 2023
Rewrite Caption Semantics: Bridging Semantic Gaps for Language-Supervised Semantic Segmentation
Yun Xing
Jian Kang
Aoran Xiao
Jiahao Nie
Ling Shao
Shijian Lu
VLM
38
12
0
24 Sep 2023
Synthetic Boost: Leveraging Synthetic Data for Enhanced Vision-Language Segmentation in Echocardiography
Rabin Adhikari
Manish Dhakal
Safal Thapaliya
K. Poudel
Prasiddha Bhandari
Bishesh Khanal
25
7
0
22 Sep 2023
TinyCLIP: CLIP Distillation via Affinity Mimicking and Weight Inheritance
Kan Wu
Houwen Peng
Zhenghong Zhou
Bin Xiao
Mengchen Liu
...
Xi Chen
Xinggang Wang
Hongyang Chao
Han Hu
VLM
OODD
29
53
0
21 Sep 2023
StructChart: Perception, Structuring, Reasoning for Visual Chart Understanding
Renqiu Xia
Bo-Wen Zhang
Hao Peng
Hancheng Ye
Xiangchao Yan
Peng Ye
Botian Shi
Yu Qiao
Junchi Yan
19
0
0
20 Sep 2023
Sound Source Localization is All about Cross-Modal Alignment
Arda Senocak
H. Ryu
Junsik Kim
Tae-Hyun Oh
Hanspeter Pfister
Joon Son Chung
36
18
0
19 Sep 2023
PRE: Vision-Language Prompt Learning with Reparameterization Encoder
Anh Pham Thi Minh
An Duc Nguyen
Georgios Tzimiropoulos
VPVLM
VLM
25
3
0
14 Sep 2023
TAP: Targeted Prompting for Task Adaptive Generation of Textual Training Instances for Visual Classification
M. Jehanzeb Mirza
Leonid Karlinsky
Wei Lin
Horst Possegger
Rogerio Feris
Horst Bischof
VLM
40
6
0
13 Sep 2023