ResearchTrend.AI

Cited By: arXiv 2210.17451

AdaMix: Mixture-of-Adaptations for Parameter-efficient Model Tuning

31 October 2022
Yaqing Wang, Sahaj Agarwal, Subhabrata Mukherjee, Xiaodong Liu, Jing Gao, Ahmed Hassan Awadallah, Jianfeng Gao
MoE

Papers citing "AdaMix: Mixture-of-Adaptations for Parameter-efficient Model Tuning"

50 / 86 papers shown
Beyond Standard MoE: Mixture of Latent Experts for Resource-Efficient Language Models
Zehua Liu, Han Wu, Ruifeng She, Xiaojin Fu, Xiongwei Han, Tao Zhong, Mingxuan Yuan
MoE
29 Mar 2025
Efficient Adapter Tuning for Joint Singing Voice Beat and Downbeat Tracking with Self-supervised Learning Features
Jiajun Deng, Yaolong Ju, Jing Yang, Simon Lui, Xunying Liu
13 Mar 2025
LoR2C: Low-Rank Residual Connection Adaptation for Parameter-Efficient Fine-Tuning
Jiancheng Zhao, Xingda Yu, Yuxiang Zhang, Zhen Yang
OffRL
01 Mar 2025
Mixture of insighTful Experts (MoTE): The Synergy of Thought Chains and Expert Mixtures in Self-Alignment
Zhili Liu, Yunhao Gou, Kai Chen, Lanqing Hong, Jiahui Gao, ..., Yu Zhang, Zhenguo Li, Xin Jiang, Q. Liu, James T. Kwok
MoE
20 Feb 2025
A Stronger Mixture of Low-Rank Experts for Fine-Tuning Foundation Models
Mengyang Sun, Yihao Wang, Tao Feng, Dan Zhang, Yifan Zhu, J. Tang
MoE
20 Feb 2025
Language Fusion for Parameter-Efficient Cross-lingual Transfer
Philipp Borchert, Ivan Vulić, Marie-Francine Moens, Jochen De Weerdt
12 Jan 2025
Aggregating Low Rank Adapters in Federated Fine-tuning
Evelyn Trautmann, Ian Hales, Martin F. Volk
AI4CE, FedML
10 Jan 2025
Investigating Mixture of Experts in Dense Retrieval
Effrosyni Sokli, Pranav Kasela, Georgios Peikos, G. Pasi
MoE
16 Dec 2024
Enhancing Trust in Large Language Models with Uncertainty-Aware Fine-Tuning
R. Krishnan, Piyush Khanna, Omesh Tickoo
HILM
03 Dec 2024
Efficient and Private: Memorisation under differentially private parameter-efficient fine-tuning in language models
Olivia Ma, Jonathan Passerat-Palmbach, Dmitrii Usynin
24 Nov 2024
Pin-Tuning: Parameter-Efficient In-Context Tuning for Few-Shot Molecular Property Prediction
Liang Wang, Qiang Liu, Shaozhen Liu, Xin Sun, Shu Wu, Liang Wang
02 Nov 2024
CleaR: Towards Robust and Generalized Parameter-Efficient Fine-Tuning for Noisy Label Learning
Yeachan Kim, Junho Kim, SangKeun Lee
NoLa, AAML
31 Oct 2024
MoTE: Reconciling Generalization with Specialization for Visual-Language to Video Knowledge Transfer
Minghao Zhu, Zhengpu Wang, Mengxian Hu, Ronghao Dang, Xiao Lin, Xun Zhou, Chengju Liu, Qijun Chen
14 Oct 2024
Reward-RAG: Enhancing RAG with Reward Driven Supervision
Thang Nguyen, Peter Chin, Yu-Wing Tai
RALM
03 Oct 2024
LLM-based multi-agent poetry generation in non-cooperative environments
Ran Zhang, Steffen Eger
LLMAG
05 Sep 2024
MergeRepair: An Exploratory Study on Merging Task-Specific Adapters in Code LLMs for Automated Program Repair
Meghdad Dehghan, Jie JW Wu, Fatemeh H. Fard, Ali Ouni
MoMe
18 Aug 2024
Learning to Route for Dynamic Adapter Composition in Continual Learning with Language Models
Vladimir Araujo, Marie-Francine Moens, Tinne Tuytelaars
CLL, MoMe
16 Aug 2024
CROME: Cross-Modal Adapters for Efficient Multimodal LLM
Sayna Ebrahimi, Sercan Ö. Arik, Tejas Nama, Tomas Pfister
13 Aug 2024
MoDE: Effective Multi-task Parameter Efficient Fine-Tuning with a Mixture of Dyadic Experts
Lin Ning, Harsh Lara, Meiqi Guo, Abhinav Rastogi
MoMe, MoE
02 Aug 2024
CELLM: An Efficient Communication in Large Language Models Training for Federated Learning
Raja Vavekanand, Kira Sam
30 Jul 2024
AIGC for Industrial Time Series: From Deep Generative Models to Large Generative Models
Lei Ren, Haiteng Wang, Yang Tang, Chunhua Yang
AI4TS, AI4CE
16 Jul 2024
Diversifying the Expert Knowledge for Task-Agnostic Pruning in Sparse Mixture-of-Experts
Zeliang Zhang, Xiaodong Liu, Hao Cheng, Chenliang Xu, Jianfeng Gao
MoE
12 Jul 2024
LoRA-GA: Low-Rank Adaptation with Gradient Approximation
Shaowen Wang, Linxi Yu, Jian Li
ALM, AI4CE
06 Jul 2024
Mixture of A Million Experts
Xu Owen He
MoE
04 Jul 2024
Short-Long Policy Evaluation with Novel Actions
Hyunji Alex Nam, Yash Chandak, Emma Brunskill
OffRL
04 Jul 2024
Lateralization LoRA: Interleaved Instruction Tuning with Modality-Specialized Adaptations
Zhiyang Xu, Minqian Liu, Ying Shen, Joy Rimchala, Jiaxin Zhang, Qifan Wang, Yu Cheng, Lifu Huang
VLM
04 Jul 2024
Lightweight Zero-shot Text-to-Speech with Mixture of Adapters
Kenichi Fujita, Takanori Ashihara, Marc Delcroix, Yusuke Ijima
01 Jul 2024
LEMoE: Advanced Mixture of Experts Adaptor for Lifelong Model Editing of Large Language Models
Renzhi Wang, Piji Li
KELM, CLL
28 Jun 2024
Structured Unrestricted-Rank Matrices for Parameter Efficient Fine-tuning
Arijit Sehanobish, Avinava Dubey, Krzysztof Choromanski, Somnath Basu Roy Chowdhury, Deepali Jain, Vikas Sindhwani, Snigdha Chaturvedi
ALM
25 Jun 2024
Retrieval-Augmented Mixture of LoRA Experts for Uploadable Machine Learning
Ziyu Zhao, Leilei Gan, Guoyin Wang, Yuwei Hu, Tao Shen, Hongxia Yang, Kun Kuang, Fei Wu
MoE, MoMe
24 Jun 2024
Crayon: Customized On-Device LLM via Instant Adapter Blending and Edge-Server Hybrid Inference
Jihwan Bang, Juntae Lee, Kyuhong Shim, Seunghan Yang, Simyung Chang
11 Jun 2024
An Empirical Study on Parameter-Efficient Fine-Tuning for MultiModal Large Language Models
Xiongtao Zhou, Jie He, Yuhua Ke, Guangyao Zhu, Víctor Gutiérrez-Basulto, Jeff Z. Pan
07 Jun 2024
MEFT: Memory-Efficient Fine-Tuning through Sparse Adapter
Jitai Hao, Weiwei Sun, Xin Xin, Qi Meng, Zhumin Chen, Pengjie Ren, Zhaochun Ren
MoE
07 Jun 2024
QuanTA: Efficient High-Rank Fine-Tuning of LLMs with Quantum-Informed Tensor Adaptation
Zhuo Chen, Rumen Dangovski, Charlotte Loh, Owen Dugan, Di Luo, Marin Soljacic
MQ
31 May 2024
RE-Adapt: Reverse Engineered Adaptation of Large Language Models
William Fleshman, Benjamin Van Durme
VLM
23 May 2024
Towards Modular LLMs by Building and Reusing a Library of LoRAs
O. Ostapenko, Zhan Su, E. Ponti, Laurent Charlin, Nicolas Le Roux, Matheus Pereira, Lucas Page-Caccia, Alessandro Sordoni
MoMe
18 May 2024
A Survey on Transformers in NLP with Focus on Efficiency
Wazib Ansar, Saptarsi Goswami, Amlan Chakrabarti
MedIm
15 May 2024
AdapterSwap: Continuous Training of LLMs with Data Removal and Access-Control Guarantees
William Fleshman, Aleem Khan, Marc Marone, Benjamin Van Durme
CLL, KELM
12 Apr 2024
Facial Affective Behavior Analysis with Instruction Tuning
Yifan Li, Anh Dao, Wentao Bao, Zhen Tan, Tianlong Chen, Huan Liu, Yu Kong
CVBM
07 Apr 2024
ReFT: Representation Finetuning for Language Models
Zhengxuan Wu, Aryaman Arora, Zheng Wang, Atticus Geiger, Daniel Jurafsky, Christopher D. Manning, Christopher Potts
OffRL
04 Apr 2024
Self-Expansion of Pre-trained Models with Mixture of Adapters for Continual Learning
Huiyi Wang, Haodong Lu, Lina Yao, Dong Gong
KELM, CLL
27 Mar 2024
Dynamic Tuning Towards Parameter and Inference Efficiency for ViT Adaptation
Wangbo Zhao, Jiasheng Tang, Yizeng Han, Yibing Song, Kai Wang, Gao Huang, F. Wang, Yang You
18 Mar 2024
Introducing Routing Functions to Vision-Language Parameter-Efficient Fine-Tuning with Low-Rank Bottlenecks
Tingyu Qu, Tinne Tuytelaars, Marie-Francine Moens
MoE
14 Mar 2024
Learning Intrinsic Dimension via Information Bottleneck for Explainable Aspect-based Sentiment Analysis
Zhenxiao Cheng, Jie Zhou, Wen Wu, Qin Chen, Liang He
28 Feb 2024
ResLoRA: Identity Residual Mapping in Low-Rank Adaption
Shuhua Shi, Shaohan Huang, Minghui Song, Zhoujun Li, Zihan Zhang, Haizhen Huang, Furu Wei, Weiwei Deng, Feng Sun, Qi Zhang
AI4CE
28 Feb 2024
PEMT: Multi-Task Correlation Guided Mixture-of-Experts Enables Parameter-Efficient Transfer Learning
Zhisheng Lin, Han Fu, Chenghao Liu, Zhuo Li, Jianling Sun
MoE, MoMe
23 Feb 2024
A Survey on Knowledge Distillation of Large Language Models
Xiaohan Xu, Ming Li, Chongyang Tao, Tao Shen, Reynold Cheng, Jinyang Li, Can Xu, Dacheng Tao, Tianyi Zhou
KELM, VLM
20 Feb 2024
LoRA-Flow: Dynamic LoRA Fusion for Large Language Models in Generative Tasks
Hanqing Wang, Bowen Ping, Shuo Wang, Xu Han, Yun-Nung Chen, Zhiyuan Liu, Maosong Sun
MoMe
18 Feb 2024
LoraRetriever: Input-Aware LoRA Retrieval and Composition for Mixed Tasks in the Wild
Ziyu Zhao, Leilei Gan, Guoyin Wang, Wangchunshu Zhou, Hongxia Yang, Kun Kuang, Fei Wu
MoMe
15 Feb 2024
Model Compression and Efficient Inference for Large Language Models: A Survey
Wenxiao Wang, Wei Chen, Yicong Luo, Yongliu Long, Zhengkai Lin, Liye Zhang, Binbin Lin, Deng Cai, Xiaofei He
MQ
15 Feb 2024