ResearchTrend.AI

arXiv:2012.15419

An Experimental Evaluation of Transformer-based Language Models in the Biomedical Domain

31 December 2020
Paul Grouchy
Shobhit Jain
Michael Liu
Kuhan Wang
Max Tian
Nidhi Arora
Hillary Ngai
Faiza Khan Khattak
Elham Dolatabadi
S. Kocak
    LM&MA
    MedIm
Abstract

With the growing amount of text in health data, there have been rapid advances in large pre-trained models that can be applied to a wide variety of biomedical tasks with minimal task-specific modifications. Given the high cost of training these models, which makes technical replication challenging, this paper summarizes experiments conducted in replicating BioBERT, along with further pre-training and careful fine-tuning in the biomedical domain. We also investigate the effectiveness of domain-specific and domain-agnostic pre-trained models across downstream biomedical NLP tasks. Our findings confirm that pre-trained models can be impactful in some downstream NLP tasks (QA and NER) in the biomedical domain; however, this improvement may not justify the high cost of domain-specific pre-training.
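The fine-tuning setup discussed above (adapting a BERT-style pre-trained encoder to a downstream task such as biomedical NER) can be sketched as follows. This is a minimal illustration, not the paper's actual configuration: the model here is a tiny randomly initialized encoder so the snippet runs without downloading a checkpoint, whereas the paper starts from a domain-specific pre-trained model such as BioBERT, and the label set and hyperparameters are placeholders.

```python
# Hypothetical sketch of token-classification (NER) fine-tuning with a
# BERT-style model, using the Hugging Face Transformers API.
import torch
from transformers import BertConfig, BertForTokenClassification

# Tiny, randomly initialized config so this runs self-contained; a real
# experiment would load a pre-trained biomedical checkpoint instead.
config = BertConfig(
    vocab_size=100,
    hidden_size=32,
    num_hidden_layers=2,
    num_attention_heads=2,
    intermediate_size=64,
    num_labels=3,  # e.g. B-Entity, I-Entity, O
)
model = BertForTokenClassification(config)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

# One fine-tuning step on a dummy batch of token ids and per-token labels.
input_ids = torch.randint(0, config.vocab_size, (2, 8))
labels = torch.randint(0, config.num_labels, (2, 8))
outputs = model(input_ids=input_ids, labels=labels)
outputs.loss.backward()   # token-level cross-entropy loss
optimizer.step()
print(float(outputs.loss))
```

In a full experiment this loop would iterate over a labeled biomedical corpus, and the starting weights (domain-specific vs. domain-agnostic) are exactly the variable whose cost-benefit trade-off the paper evaluates.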
