NoisyTune: A Little Noise Can Help You Finetune Pretrained Language Models Better
arXiv: 2202.12024 (v2, latest)
24 February 2022
Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang, Xing Xie
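For context on the method the citing papers build on: NoisyTune perturbs each pretrained parameter matrix with uniform noise scaled by that matrix's standard deviation before fine-tuning begins. A minimal pure-Python sketch of that idea, using a toy dict of flat float lists as a stand-in for real model tensors (`noisytune_perturb` and the toy data layout are illustrative assumptions, not the authors' code):

```python
import random
import statistics

def noisytune_perturb(params, lam=0.15):
    """NoisyTune-style perturbation: add uniform noise from
    U(-lam/2, lam/2), scaled by each parameter matrix's standard
    deviation, before fine-tuning. `lam` controls noise intensity.

    `params` maps parameter names to flat lists of floats (a toy
    stand-in for a real model's weight tensors)."""
    perturbed = {}
    for name, values in params.items():
        # A single-element "matrix" has zero spread, so it is left unchanged.
        std = statistics.pstdev(values) if len(values) > 1 else 0.0
        perturbed[name] = [v + (random.random() - 0.5) * lam * std
                           for v in values]
    return perturbed
```

Because the noise is proportional to each matrix's own spread, heavily scaled matrices receive proportionally larger perturbations, while the relative disturbance stays uniform across the model.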
Papers citing "NoisyTune: A Little Noise Can Help You Finetune Pretrained Language Models Better" (29 papers shown)
1. Noisy Deep Ensemble: Accelerating Deep Ensemble Learning via Noise Injection
   Shunsuke Sakai, Shunsuke Tsuge, Tatsuhito Hasegawa (08 Apr 2025)
2. ProtoBERT-LoRA: Parameter-Efficient Prototypical Finetuning for Immunotherapy Study Identification
   Shijia Zhang, Xiyu Ding, Kai Ding, Jacob Zhang, Kevin Galinsky, Mengrui Wang, Ryan P. Mayers, Zheyu Wang, Hadi Kharrazi (26 Mar 2025)
3. HaLoRA: Hardware-aware Low-Rank Adaptation for Large Language Models Based on Hybrid Compute-in-Memory Architecture
   Taiqiang Wu, Chenchen Ding, Wenyong Zhou, Yuxin Cheng, Xincheng Feng, Shuqi Wang, Chufan Shi, Ziyue Liu, Ngai Wong (27 Feb 2025)
4. VideoLights: Feature Refinement and Cross-Task Alignment Transformer for Joint Video Highlight Detection and Moment Retrieval [VGen]
   Dhiman Paul, Md Rizwan Parvez, Nabeel Mohammed, Shafin Rahman (02 Dec 2024)
5. Exploring Accuracy-Fairness Trade-off in Large Language Models
   Qingquan Zhang, Qiqi Duan, Bo Yuan, Yuhui Shi, Qingbin Liu (21 Nov 2024)
6. BiSSL: Enhancing the Alignment Between Self-Supervised Pretraining and Downstream Fine-Tuning via Bilevel Optimization
   Gustav Wagner Zakarias, Lars Kai Hansen, Zheng-Hua Tan (03 Oct 2024)
7. Audio-Guided Fusion Techniques for Multimodal Emotion Analysis
   Pujin Shi, Fei Gao (08 Sep 2024)
8. Light-weight Fine-tuning Method for Defending Adversarial Noise in Pre-trained Medical Vision-Language Models [AAML]
   Xu Han, Linghao Jin, Xuezhe Ma, Xiaofeng Liu (02 Jul 2024)
9. Expressive and Generalizable Low-rank Adaptation for Large Models via Slow Cascaded Learning [AI4CE]
   Siwei Li, Yifan Yang, Yifei Shen, Fangyun Wei, Zongqing Lu, L. Qiu, Yuqing Yang (01 Jul 2024)
10. Can Small Language Models Learn, Unlearn, and Retain Noise Patterns? [MU]
    Nicy Scaria, Silvester John Joseph Kennedy, Deepak N. Subramani (01 Jul 2024)
11. Fighting Randomness with Randomness: Mitigating Optimisation Instability of Fine-Tuning using Delayed Ensemble and Noisy Interpolation
    Branislav Pecher, Ján Cegin, Róbert Belanec, Jakub Simko, Ivan Srba, Maria Bielikova (18 Jun 2024)
12. Slight Corruption in Pre-training Data Makes Better Diffusion Models [DiffM]
    Hao Chen, Yujin Han, Diganta Misra, Xiang Li, Kai Hu, Difan Zou, Masashi Sugiyama, Jindong Wang, Bhiksha Raj (30 May 2024)
13. Prompt-Based Bias Calibration for Better Zero/Few-Shot Learning of Language Models
    Kang He, Yinghan Long, Kaushik Roy (15 Feb 2024)
14. NoisyICL: A Little Noise in Model Parameters Calibrates In-context Learning
    Yufeng Zhao, Yoshihiro Sakai, Naoya Inoue (08 Feb 2024)
15. See the Unseen: Better Context-Consistent Knowledge-Editing by Noises [KELM]
    Youcheng Huang, Wenqiang Lei, Zheng Zhang, Jiancheng Lv, Shuicheng Yan (15 Jan 2024)
16. Dynamic Corrective Self-Distillation for Better Fine-Tuning of Pretrained Models
    Ibtihel Amara, Vinija Jain, Aman Chadha (12 Dec 2023)
17. Controlled Randomness Improves the Performance of Transformer Models
    Tobias Deußer, Cong Zhao, Wolfgang Krämer, David Leonhard, Christian Bauckhage, R. Sifa (20 Oct 2023)
18. Unlocking Emergent Modularity in Large Language Models
    Zihan Qiu, Zeyu Huang, Jie Fu (17 Oct 2023)
19. Understanding and Mitigating the Label Noise in Pre-training on Downstream Tasks
    Hao Chen, Jindong Wang, Ankit Shah, Ran Tao, Jianguo Huang, Berfin Şimşek, Masashi Sugiyama, Bhiksha Raj (29 Sep 2023)
20. Improving Video Colorization by Test-Time Tuning
    Yaping Zhao, Haitian Zheng, Jiebo Luo, E. Lam (25 Jun 2023)
21. Preserving Commonsense Knowledge from Pre-trained Language Models via Causal Inference [AAML, KELM, CML, CLL]
    Junhao Zheng, Qianli Ma, Shengjie Qiu, Yue Wu, Peitian Ma, Junlong Liu, Hu Feng, Xichen Shang, Haibin Chen (19 Jun 2023)
22. Incorporating Distributions of Discourse Structure for Long Document Abstractive Summarization
    Dongqi Pu, Yifa Wang, Vera Demberg (26 May 2023)
23. Bi-Drop: Enhancing Fine-tuning Generalization via Synchronous sub-net Estimation and Optimization
    Shoujie Tong, Heming Xia, Damai Dai, Runxin Xu, Tianyu Liu, Binghuai Lin, Yunbo Cao, Zhifang Sui (24 May 2023)
24. Analyzing and Reducing the Performance Gap in Cross-Lingual Transfer with Fine-tuning Slow and Fast [CLL]
    Yiduo Guo, Yaobo Liang, Dongyan Zhao, Bin Liu, Du Nan (19 May 2023)
25. HyPe: Better Pre-trained Language Model Fine-tuning with Hidden Representation Perturbation
    Hongyi Yuan, Zheng Yuan, Chuanqi Tan, Fei Huang, Songfang Huang (17 Dec 2022)
26. Prototypical Fine-tuning: Towards Robust Performance Under Varying Data Sizes
    Yiqiao Jin, Xiting Wang, Y. Hao, Yizhou Sun, Xing Xie (24 Nov 2022)
27. Exploring Mode Connectivity for Pre-trained Language Models
    Yujia Qin, Cheng Qian, Jing Yi, Weize Chen, Yankai Lin, Xu Han, Zhiyuan Liu, Maosong Sun, Jie Zhou (25 Oct 2022)
28. PATS: Sensitivity-aware Noisy Learning for Pretrained Language Models [AAML]
    Yupeng Zhang, Hongzhi Zhang, Sirui Wang, Wei Wu, Zhoujun Li (22 Oct 2022)
29. Improving Fine-tuning of Self-supervised Models with Contrastive Initialization [SSL]
    Haolin Pan, Yong Guo, Qinyi Deng, Hao-Fan Yang, Yiqun Chen, Jian Chen (30 Jul 2022)