ResearchTrend.AI
Parameter-Efficient Tuning on Layer Normalization for Pre-trained Language Models

arXiv: 2211.08682 · 16 November 2022
Wang Qi, Yu-Ping Ruan, Y. Zuo, Taihao Li

Papers citing "Parameter-Efficient Tuning on Layer Normalization for Pre-trained Language Models"

14 / 14 papers shown
UP-Person: Unified Parameter-Efficient Transfer Learning for Text-based Person Retrieval
Yating Liu, Yaowei Li, Xiangyuan Lan, Wenming Yang, Zimo Liu, Q. Liao
14 Apr 2025

Unlocking the Hidden Potential of CLIP in Generalizable Deepfake Detection
Andrii Yermakov, Jan Cech, Jiri Matas
25 Mar 2025

MoLEx: Mixture of Layer Experts for Finetuning with Sparse Upcycling
R. Teo, T. Nguyen
14 Mar 2025

iConFormer: Dynamic Parameter-Efficient Tuning with Input-Conditioned Adaptation
Hayeon Jo, Hyesong Choi, Minhee Cho, Dongbo Min
04 Sep 2024

Forecast-PEFT: Parameter-Efficient Fine-Tuning for Pre-trained Motion Forecasting Models
Jifeng Wang, Kaouther Messaoud, Yuejiang Liu, Juergen Gall, Alexandre Alahi
28 Jul 2024

Missing Modality Prediction for Unpaired Multimodal Learning via Joint Embedding of Unimodal Models
Donggeun Kim, Taesup Kim
17 Jul 2024

Repurposing Language Models into Embedding Models: Finding the Compute-Optimal Recipe
Alicja Ziarko, Albert Q. Jiang, Bartosz Piotrowski, Wenda Li, M. Jamnik, Piotr Miłoś
06 Jun 2024

Context-PEFT: Efficient Multi-Modal, Multi-Task Fine-Tuning
Avelina Asada Hadji-Kyriacou, Ognjen Arandjelović
14 Dec 2023

CLAMP: Contrastive LAnguage Model Prompt-tuning
Piotr Teterwak, Ximeng Sun, Bryan A. Plummer, Kate Saenko, Ser-Nam Lim
04 Dec 2023

MetaWeather: Few-Shot Weather-Degraded Image Restoration
Youngrae Kim, Younggeol Cho, Thanh-Tung Nguyen, Seunghoon Hong, Dongman Lee
28 Aug 2023

Adapt Your Teacher: Improving Knowledge Distillation for Exemplar-free Continual Learning
Filip Szatkowski, Mateusz Pyla, Marcin Przewiȩźlikowski, Sebastian Cygert, Bartlomiej Twardowski, Tomasz Trzciñski
18 Aug 2023

PEFT-Ref: A Modular Reference Architecture and Typology for Parameter-Efficient Finetuning Techniques
Mohammed Sabry, Anya Belz
24 Apr 2023

Pre-trained Models for Natural Language Processing: A Survey
Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, Xuanjing Huang
18 Mar 2020

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Jinpeng Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
20 Apr 2018