arXiv: 2403.18025
Improving Pre-trained Language Model Sensitivity via Mask Specific losses: A case study on Biomedical NER
26 March 2024
Micheal Abaho, Danushka Bollegala, Gary Leeming, Dan Joyce, Iain E Buchan
Papers citing "Improving Pre-trained Language Model Sensitivity via Mask Specific losses: A case study on Biomedical NER" (3 of 3 papers shown):
1. Meta-learning via Language Model In-context Tuning. Yanda Chen, Ruiqi Zhong, Sheng Zha, George Karypis, He He. 15 Oct 2021.
2. Making Pre-trained Language Models Better Few-shot Learners. Tianyu Gao, Adam Fisch, Danqi Chen. 31 Dec 2020.
3. Mixout: Effective Regularization to Finetune Large-scale Pretrained Language Models. Cheolhyoung Lee, Kyunghyun Cho, Wanmo Kang. 25 Sep 2019.