Module-wise Adaptive Distillation for Multimodality Foundation Models
6 October 2023
Authors: Chen Liang, Jiahui Yu, Ming-Hsuan Yang, Matthew A. Brown, Huayu Chen, Tuo Zhao, Boqing Gong, Tianyi Zhou

Papers citing "Module-wise Adaptive Distillation for Multimodality Foundation Models"

Showing 10 of 10 citing papers.
1. CLIP-PING: Boosting Lightweight Vision-Language Models with Proximus Intrinsic Neighbors Guidance
   Authors: Chu Myaet Thwal, Ye Lin Tun, Minh N. H. Nguyen, Eui-nam Huh, Choong Seon Hong
   Topics: VLM · 05 Dec 2024 · Counts: 74 / 0 / 0

2. Applications of Knowledge Distillation in Remote Sensing: A Survey
   Authors: Yassine Himeur, N. Aburaed, O. Elharrouss, Iraklis Varlamis, Shadi Atalla, W. Mansoor, Hussain Al Ahmad
   18 Sep 2024 · Counts: 45 / 4 / 0

3. Seeking the Sufficiency and Necessity Causal Features in Multimodal Representation Learning
   Authors: Boyu Chen, Junjie Liu, Zhu Li, Mengyue Yang
   29 Aug 2024 · Counts: 35 / 1 / 0

4. Dinomaly: The Less Is More Philosophy in Multi-Class Unsupervised Anomaly Detection
   Authors: Jia Guo, Shuai Lu, Weihang Zhang, Huiqi Li, Hongen Liao
   Topics: ViT · 23 May 2024 · Counts: 69 / 8 / 0

5. m2mKD: Module-to-Module Knowledge Distillation for Modular Transformers
   Authors: Ka Man Lo, Yiming Liang, Wenyu Du, Yuantao Fan, Zili Wang, Wenhao Huang, Lei Ma, Jie Fu
   Topics: MoE · 26 Feb 2024 · Counts: 42 / 2 / 0

6. Embodied Multi-Modal Agent trained by an LLM from a Parallel TextWorld
   Authors: Yijun Yang, Tianyi Zhou, Kanxue Li, Dapeng Tao, Lusong Li, Li Shen, Xiaodong He, Jing Jiang, Yuhui Shi
   Topics: LLMAG, LM&Ro · 28 Nov 2023 · Counts: 30 / 35 / 0

7. HallusionBench: An Advanced Diagnostic Suite for Entangled Language Hallucination and Visual Illusion in Large Vision-Language Models
   Authors: Tianrui Guan, Fuxiao Liu, Xiyang Wu, Ruiqi Xian, Zongxia Li, ..., Lichang Chen, Furong Huang, Yaser Yacoob, Dinesh Manocha
   Topics: VLM, MLLM · 23 Oct 2023 · Counts: 42 / 156 / 0

8. CLIP-KD: An Empirical Study of CLIP Model Distillation
   Authors: Chuanguang Yang, Zhulin An, Libo Huang, Junyu Bi, Xinqiang Yu, Hansheng Yang, Boyu Diao, Yongjun Xu
   Topics: VLM · 24 Jul 2023 · Counts: 29 / 27 / 0

9. How Much Can CLIP Benefit Vision-and-Language Tasks?
   Authors: Sheng Shen, Liunian Harold Li, Hao Tan, Joey Tianyi Zhou, Anna Rohrbach, Kai-Wei Chang, Z. Yao, Kurt Keutzer
   Topics: CLIP, VLM, MLLM · 13 Jul 2021 · Counts: 202 / 405 / 0

10. Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision
    Authors: Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu H. Pham, Quoc V. Le, Yun-hsuan Sung, Zhen Li, Tom Duerig
    Topics: VLM, CLIP · 11 Feb 2021 · Counts: 334 / 3,708 / 0