Patching open-vocabulary models by interpolating weights
arXiv 2208.05592 · 10 August 2022
Gabriel Ilharco, Mitchell Wortsman, S. Gadre, Shuran Song, Hannaneh Hajishirzi, Simon Kornblith, Ali Farhadi, Ludwig Schmidt
VLM, KELM
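For context on the technique the papers below build on: the title describes patching a model by linearly interpolating the weights of the original zero-shot model with those of a model fine-tuned on the patching task. A minimal sketch of that interpolation, assuming PyTorch state dicts; the helper name interpolate_weights is illustrative, not taken from the paper's released code:

```python
import torch

def interpolate_weights(zeroshot_sd, finetuned_sd, alpha=0.5):
    """Weight-space patching sketch: (1 - alpha) * zero-shot + alpha * fine-tuned."""
    # Both models must share an architecture, i.e. identical state-dict keys.
    assert zeroshot_sd.keys() == finetuned_sd.keys()
    return {
        key: torch.lerp(zeroshot_sd[key], finetuned_sd[key], alpha)
        for key in zeroshot_sd
    }

# Usage with hypothetical model objects: sweep alpha and pick the value that
# fixes the patching task while preserving accuracy on held-out tasks.
# patched_model.load_state_dict(
#     interpolate_weights(zs_model.state_dict(), ft_model.state_dict(), alpha=0.5))
```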

Papers citing "Patching open-vocabulary models by interpolating weights"

50 / 133 papers shown
No Filter: Cultural and Socioeconomic Diversity in Contrastive Vision-Language Models
Angeline Pouget, Lucas Beyer, Emanuele Bugliarello, Xiao Wang, Andreas Steiner, Xiaohua Zhai, Ibrahim Alabdulmohsin
VLM
22 May 2024
Learning More Generalized Experts by Merging Experts in Mixture-of-Experts
Sejik Park
FedML, CLL, MoMe
19 May 2024
The impact of Compositionality in Zero-shot Multi-label action recognition for Object-based tasks
Carmela Calabrese, Stefano Berti, Giulia Pasquale, Lorenzo Natale
VLM
14 May 2024
Localizing Task Information for Improved Model Merging and Compression
Ke Wang, Nikolaos Dimitriadis, Guillermo Ortiz-Jimenez, François Fleuret, Pascal Frossard
MoMe
13 May 2024
IMWA: Iterative Model Weight Averaging Benefits Class-Imbalanced Learning Tasks
Zitong Huang, Ze Chen, Bowen Dong, Chaoqi Liang, Erjin Zhou, Wangmeng Zuo
MoMe, CLL
25 Apr 2024
Watch Your Step: Optimal Retrieval for Continual Learning at Scale
Truman Hickok, Dhireesha Kudithipudi
16 Apr 2024
Post-Hoc Reversal: Are We Selecting Models Prematurely?
Rishabh Ranjan, Saurabh Garg, Mrigank Raman, Carlos Guestrin, Zachary Chase Lipton
11 Apr 2024
Prompt Learning via Meta-Regularization
Jinyoung Park, Juyeon Ko, Hyunwoo J. Kim
VLM, VP
01 Apr 2024
Generalizable and Stable Finetuning of Pretrained Language Models on Low-Resource Texts
Sai Ashish Somayajula, Youwei Liang, Abhishek Singh, Li Zhang, Pengtao Xie
19 Mar 2024
DAM: Dynamic Adapter Merging for Continual Video QA Learning
Feng Cheng, Ziyang Wang, Yi-Lin Sung, Yan-Bo Lin, Mohit Bansal, Gedas Bertasius
CLL, MoMe
13 Mar 2024
Unleashing the Power of Meta-tuning for Few-shot Generalization Through Sparse Interpolated Experts
Shengzhuang Chen, Jihoon Tack, Yunqiao Yang, Yee Whye Teh, Jonathan Richard Schwarz, Ying Wei
MoE
13 Mar 2024
CLoVe: Encoding Compositional Language in Contrastive Vision-Language Models
Santiago Castro, Amir Ziai, Avneesh Saluja, Zhuoning Yuan, Rada Mihalcea
MLLM, CoGe, VLM
22 Feb 2024
Multilinear Mixture of Experts: Scalable Expert Specialization through Factorization
James Oldfield, Markos Georgopoulos, Grigorios G. Chrysos, Christos Tzelepis, Yannis Panagakis, M. Nicolaou, Jiankang Deng, Ioannis Patras
MoE
19 Feb 2024
Language Models are Homer Simpson! Safety Re-Alignment of Fine-tuned Language Models through Task Arithmetic
Rishabh Bhardwaj, Do Duc Anh, Soujanya Poria
MoMe
19 Feb 2024
BioMistral: A Collection of Open-Source Pretrained Large Language Models for Medical Domains
Yanis Labrak, Adrien Bazoge, Emmanuel Morin, P. Gourraud, Mickael Rouvier, Richard Dufour
15 Feb 2024
Towards Safer Large Language Models through Machine Unlearning
Zheyuan Liu, Guangyao Dou, Zhaoxuan Tan, Yijun Tian, Meng Jiang
KELM, MU
15 Feb 2024
Mind the Modality Gap: Towards a Remote Sensing Vision-Language Model via Cross-modal Alignment
Angelos Zavras, Dimitrios Michail, Begüm Demir, Ioannis Papoutsis
VLM
15 Feb 2024
On the Emergence of Cross-Task Linearity in the Pretraining-Finetuning Paradigm
Zhanpeng Zhou, Zijun Chen, Yilan Chen, Bo Zhang, Junchi Yan
MoMe
06 Feb 2024
FROSTER: Frozen CLIP Is A Strong Teacher for Open-Vocabulary Action Recognition
Xiaohui Huang, Hao Zhou, Kun Yao, Kai Han
VLM
05 Feb 2024
WARM: On the Benefits of Weight Averaged Reward Models
Alexandre Ramé, Nino Vieillard, Léonard Hussenot, Robert Dadashi, Geoffrey Cideron, Olivier Bachem, Johan Ferret
22 Jan 2024
Time is Encoded in the Weights of Finetuned Language Models
Kai Nylund, Suchin Gururangan, Noah A. Smith
AI4TS
20 Dec 2023
Weighted Ensemble Models Are Strong Continual Learners
Imad Eddine Marouf, Subhankar Roy, Enzo Tartaglione, Stéphane Lathuilière
CLL
14 Dec 2023
Model Breadcrumbs: Scaling Multi-Task Model Merging with Sparse Masks
Mohammad-Javad Davari, Eugene Belilovsky
MoMe
11 Dec 2023
MAFA: Managing False Negatives for Vision-Language Pre-training
Jaeseok Byun, Dohoon Kim, Taesup Moon
VLM
11 Dec 2023
OST: Refining Text Knowledge with Optimal Spatio-Temporal Descriptor for General Video Recognition
Tom Tongjia Chen, Hongshan Yu, Zhengeng Yang, Zechuan Li, Wei Sun, Chen Chen
30 Nov 2023
LM-Cocktail: Resilient Tuning of Language Models via Model Merging
Shitao Xiao, Zheng Liu, Peitian Zhang, Xingrun Xing
MoMe, KELM
22 Nov 2023
ComPEFT: Compression for Communicating Parameter Efficient Updates via Sparsification and Quantization
Prateek Yadav, Leshem Choshen, Colin Raffel, Mohit Bansal
22 Nov 2023
DiLoCo: Distributed Low-Communication Training of Language Models
Arthur Douillard, Qixuan Feng, Andrei A. Rusu, Rachita Chhaparia, Yani Donchev, A. Kuncoro, Marc'Aurelio Ranzato, Arthur Szlam, Jiajun Shen
14 Nov 2023
Holistic Transfer: Towards Non-Disruptive Fine-Tuning with Partial Target Data
Cheng-Hao Tu, Hong-You Chen, Zheda Mai, Shitian Zhao, Vardaan Pahuja, Tanya Berger-Wolf, Song Gao, Charles V. Stewart, Yu-Chuan Su, Wei-Lun Chao
CLL
02 Nov 2023
TiC-CLIP: Continual Training of CLIP Models
Saurabh Garg, Mehrdad Farajtabar, Hadi Pouransari, Raviteja Vemulapalli, Sachin Mehta, Oncel Tuzel, Vaishaal Shankar, Fartash Faghri
VLM, CLIP
24 Oct 2023
SAM-CLIP: Merging Vision Foundation Models towards Semantic and Spatial Understanding
Haoxiang Wang, Pavan Kumar Anasosalu Vasu, Fartash Faghri, Raviteja Vemulapalli, Mehrdad Farajtabar, Sachin Mehta, Mohammad Rastegari, Oncel Tuzel, Hadi Pouransari
VLM
23 Oct 2023
Large Language Models are Visual Reasoning Coordinators
Liangyu Chen, Bo Li, Sheng Shen, Jingkang Yang, Chunyuan Li, Kurt Keutzer, Trevor Darrell, Ziwei Liu
VLM, LRM
23 Oct 2023
Building an Open-Vocabulary Video CLIP Model with Better Architectures, Optimization and Data
Zuxuan Wu, Zejia Weng, Wujian Peng, Xitong Yang, Ang Li, Larry S. Davis, Yu-Gang Jiang
CLIP, VLM
08 Oct 2023
Parameter Efficient Multi-task Model Fusion with Partial Linearization
Anke Tang, Li Shen, Yong Luo, Yibing Zhan, Han Hu, Bo Du, Yixin Chen, Dacheng Tao
MoMe
07 Oct 2023
07 Oct 2023
Profit: Benchmarking Personalization and Robustness Trade-off in
  Federated Prompt Tuning
Profit: Benchmarking Personalization and Robustness Trade-off in Federated Prompt Tuning
Liam Collins
Shanshan Wu
Sewoong Oh
K. Sim
FedML
94
10
0
06 Oct 2023
Merge, Then Compress: Demystify Efficient SMoE with Hints from Its
  Routing Policy
Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy
Pingzhi Li
Zhenyu Zhang
Prateek Yadav
Yi-Lin Sung
Yu Cheng
Mohit Bansal
Tianlong Chen
MoMe
80
39
0
02 Oct 2023
Deep Model Fusion: A Survey
Weishi Li, Yong Peng, Miao Zhang, Liang Ding, Han Hu, Li Shen
FedML, MoMe
27 Sep 2023
Cordyceps@LT-EDI: Patching Language-Specific Homophobia/Transphobia Classifiers with a Multilingual Understanding
Dean Ninalga
24 Sep 2023
Fine-tuning can cripple your foundation model; preserving features may be the solution
Jishnu Mukhoti, Y. Gal, Philip Torr, P. Dokania
CLL
25 Aug 2023
UnIVAL: Unified Model for Image, Video, Audio and Language Tasks
Mustafa Shukor, Corentin Dancette, Alexandre Ramé, Matthieu Cord
MoMe, MLLM
30 Jul 2023
FedSoup: Improving Generalization and Personalization in Federated Learning via Selective Model Interpolation
Minghui Chen, Meirui Jiang, Qi Dou, Zehua Wang, Xiaoxiao Li
FedML
20 Jul 2023
Tangent Model Composition for Ensembling and Continual Fine-tuning
Tian Yu Liu, Stefano Soatto
LRM, MoMe, CLL
16 Jul 2023
Self-regulating Prompts: Foundational Model Adaptation without Forgetting
Muhammad Uzair Khattak, Syed Talal Wasim, Muzammal Naseer, Salman Khan, Ming-Hsuan Yang, Fahad Shahbaz Khan
VLM
13 Jul 2023
Derivative Free Weight-space Ensembling
Dean Ninalga
MoMe
07 Jul 2023
ProbVLM: Probabilistic Adapter for Frozen Vision-Language Models
Uddeshya Upadhyay, Shyamgopal Karthik, Massimiliano Mancini, Zeynep Akata
MLLM, VLM
01 Jul 2023
Graph Ladling: Shockingly Simple Parallel GNN Training without Intermediate Communication
A. Jaiswal, Shiwei Liu, Tianlong Chen, Ying Ding, Zhangyang Wang
GNN
18 Jun 2023
Instant Soup: Cheap Pruning Ensembles in A Single Pass Can Draw Lottery Tickets from Large Models
A. Jaiswal, Shiwei Liu, Tianlong Chen, Ying Ding, Zhangyang Wang
VLM
18 Jun 2023
Rewarded soups: towards Pareto-optimal alignment by interpolating weights fine-tuned on diverse rewards
Alexandre Ramé, Guillaume Couairon, Mustafa Shukor, Corentin Dancette, Jean-Baptiste Gaya, Laure Soulier, Matthieu Cord
MoMe
07 Jun 2023
Soft Merging of Experts with Adaptive Routing
Mohammed Muqeeth, Haokun Liu, Colin Raffel
MoMe, MoE
06 Jun 2023
TIES-Merging: Resolving Interference When Merging Models
Prateek Yadav, Derek Tam, Leshem Choshen, Colin Raffel, Joey Tianyi Zhou
MoMe
02 Jun 2023