Improving Pre-trained Language Model Sensitivity via Mask Specific losses: A case study on Biomedical NER

26 March 2024
Micheal Abaho, Danushka Bollegala, Gary Leeming, Dan Joyce, Iain E. Buchan
arXiv: 2403.18025

Papers citing "Improving Pre-trained Language Model Sensitivity via Mask Specific losses: A case study on Biomedical NER"

3 of 3 papers shown:

1. Meta-learning via Language Model In-context Tuning
   Yanda Chen, Ruiqi Zhong, Sheng Zha, George Karypis, He He
   15 Oct 2021

2. Making Pre-trained Language Models Better Few-shot Learners
   Tianyu Gao, Adam Fisch, Danqi Chen
   31 Dec 2020

3. Mixout: Effective Regularization to Finetune Large-scale Pretrained Language Models
   Cheolhyoung Lee, Kyunghyun Cho, Wanmo Kang
   25 Sep 2019