Efficient Fine-Tuning of BERT Models on the Edge

3 May 2022
Danilo Vucetic, Mohammadreza Tayaranian, M. Ziaeefard, J. Clark, B. Meyer, W. Gross
arXiv: 2205.01541

Papers citing "Efficient Fine-Tuning of BERT Models on the Edge"

24 / 24 papers shown
Look Within or Look Beyond? A Theoretical Comparison Between Parameter-Efficient and Full Fine-Tuning
Yongkang Liu, Xingle Xu, Ercong Nie, Zijing Wang, Shi Feng, Daling Wang, Qian Li, Hinrich Schütze
28 May 2025

NEAT: Nonlinear Parameter-efficient Adaptation of Pre-trained Models
Yibo Zhong, Haoxiang Jiang, Lincan Li, Ryumei Nakada, Tianci Liu, Linjun Zhang, Huaxiu Yao, Haoyu Wang
24 Feb 2025

Parameter-Efficient Fine-Tuning in Large Models: A Survey of Methodologies
Liwen Wang, Sheng Chen, Linnan Jiang, Shu Pan, Runze Cai, Sen Yang, Fei Yang
24 Oct 2024

HUT: A More Computation Efficient Fine-Tuning Method With Hadamard Updated Transformation
Geyuan Zhang, Xiaofei Zhou, Chuheng Chen
20 Sep 2024

AI Act for the Working Programmer
Holger Hermanns, Anne Lauber-Rönsberg, Philip Meinel, Sarah Sterz, Hanwei Zhang
23 Jul 2024

Foundation Model Engineering: Engineering Foundation Models Just as Engineering Software
Dezhi Ran, Mengzhou Wu, Wei Yang, Tao Xie
11 Jul 2024

Let the Expert Stick to His Last: Expert-Specialized Fine-Tuning for Sparse Architectural Large Language Models
Zihan Wang, Deli Chen, Damai Dai, Runxin Xu, Zhuoshu Li, Y. Wu
02 Jul 2024

MEFT: Memory-Efficient Fine-Tuning through Sparse Adapter
Jitai Hao, Weiwei Sun, Xin Xin, Qi Meng, Zhumin Chen, Fajie Yuan, Zhaochun Ren
07 Jun 2024

DoRA: Enhancing Parameter-Efficient Fine-Tuning with Dynamic Rank Distribution
Yulong Mao, Kaiyu Huang, Changhao Guan, Ganglin Bao, Fengran Mo, Jinan Xu
27 May 2024

Parameter-Efficient Fine-Tuning for Large Models: A Comprehensive Survey
Zeyu Han, Chao Gao, Jinyang Liu, Jeff Zhang, Sai Qian Zhang
21 Mar 2024

Training Machine Learning models at the Edge: A Survey
Aymen Rayane Khouas, Mohamed Reda Bouadjenek, Hakim Hacid, Sunil Aryal
05 Mar 2024

Large Language Model for Mental Health: A Systematic Review
Zhijun Guo, A. Lai, Johan H Thygesen, Joseph Farrington, Thomas Keen, Kezhi Li
19 Feb 2024

CERM: Context-aware Literature-based Discovery via Sentiment Analysis
Julio Christian Young, Uchenna Akujuobi
27 Jan 2024

HiFT: A Hierarchical Full Parameter Fine-Tuning Strategy
Yongkang Liu, Yiqun Zhang, Qian Li, Tong Liu, Shi Feng, Daling Wang, Yifei Zhang, Hinrich Schütze
26 Jan 2024

Dynamic Layer Tying for Parameter-Efficient Transformers
Tamir David Hay, Lior Wolf
23 Jan 2024

PRILoRA: Pruned and Rank-Increasing Low-Rank Adaptation
Nadav Benedek, Lior Wolf
20 Jan 2024

PersianMind: A Cross-Lingual Persian-English Large Language Model
Pedram Rostami, Ali Salemi, M. Dousti
12 Jan 2024

PERP: Rethinking the Prune-Retrain Paradigm in the Era of LLMs
Max Zimmer, Megi Andoni, Christoph Spiegel, Sebastian Pokutta
23 Dec 2023

IncreLoRA: Incremental Parameter Allocation Method for Parameter-Efficient Fine-tuning
Feiyu F. Zhang, Liangzhi Li, Jun-Cheng Chen, Zhouqian Jiang, Bowen Wang, Yiming Qian
23 Aug 2023

Sensitivity-Aware Finetuning for Accuracy Recovery on Deep Learning Hardware
Lakshmi Nair, D. Bunandar
05 Jun 2023

Subspace-Configurable Networks
Dong Wang, O. Saukh, Xiaoxi He, Lothar Thiele
22 May 2023

Scaling Down to Scale Up: A Guide to Parameter-Efficient Fine-Tuning
Vladislav Lialin, Vijeta Deshpande, Anna Rumshisky
28 Mar 2023

Clinical BioBERT Hyperparameter Optimization using Genetic Algorithm
N. Kollapally, J. Geller
08 Feb 2023

Efficient Fine-Tuning of Compressed Language Models with Learners
Danilo Vucetic, Mohammadreza Tayaranian, M. Ziaeefard, J. Clark, B. Meyer, W. Gross
03 Aug 2022