How Important is Domain Specificity in Language Models and Instruction Finetuning for Biomedical Relation Extraction?

21 February 2024
Aviv Brokman, Ramakanth Kavuluru
arXiv: 2402.13470

Papers citing "How Important is Domain Specificity in Language Models and Instruction Finetuning for Biomedical Relation Extraction?"

30 papers shown
Llama 2: Open Foundation and Fine-Tuned Chat Models
Hugo Touvron, Louis Martin, Kevin R. Stone, Peter Albert, Amjad Almahairi, ..., Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom
18 Jul 2023

End-to-End n-ary Relation Extraction for Combination Drug Therapies
Yuhang Jiang, Ramakanth Kavuluru
29 Mar 2023

Self-Instruct: Aligning Language Models with Self-Generated Instructions
Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, Hannaneh Hajishirzi
20 Dec 2022

Scaling Instruction-Finetuned Language Models
Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, ..., Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, Jason W. Wei
20 Oct 2022

BioGPT: Generative Pre-trained Transformer for Biomedical Text Generation and Mining
Renqian Luo, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon, Tie-Yan Liu
19 Oct 2022

BioBART: Pretraining and Evaluation of A Biomedical Generative Language Model
Hongyi Yuan, Zheng Yuan, Ruyi Gan, Jiaxing Zhang, Yutao Xie, Sheng Yu
08 Apr 2022

A sequence-to-sequence approach for document-level relation extraction
John Giorgi, Gary D. Bader, Bo Wang
03 Apr 2022

Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
04 Mar 2022

Multitask Prompted Training Enables Zero-Shot Task Generalization
Victor Sanh, Albert Webson, Colin Raffel, Stephen H. Bach, Lintang Sutawika, ..., T. Bers, Stella Biderman, Leo Gao, Thomas Wolf, Alexander M. Rush
15 Oct 2021

Discovering Drug-Target Interaction Knowledge from Biomedical Literature
Yutai Hou, Yingce Xia, Lijun Wu, Shufang Xie, Yang Fan, Jinhua Zhu, Wanxiang Che, Tao Qin, Tie-Yan Liu
27 Sep 2021

Finetuned Language Models Are Zero-Shot Learners
Jason W. Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, Quoc V. Le
03 Sep 2021

SciFive: a text-to-text transformer model for biomedical literature
Long Phan, J. Anibal, H. Tran, Shaurya Chanana, Erol Bahadroglu, Alec Peltekian, G. Altan-Bonnet
28 May 2021

An End-to-end Model for Entity-level Relation Extraction using Multi-instance Learning
Markus Eberts, A. Ulges
11 Feb 2021

BioMegatron: Larger Biomedical Domain Language Model
Hoo-Chang Shin, Yang Zhang, Evelina Bakhturina, Raul Puri, M. Patwary, Mohammad Shoeybi, Raghav Mani
12 Oct 2020

Learning to summarize from human feedback
Nisan Stiennon, Long Ouyang, Jeff Wu, Daniel M. Ziegler, Ryan J. Lowe, Chelsea Voss, Alec Radford, Dario Amodei, Paul Christiano
02 Sep 2020

Domain-Specific Language Model Pretraining for Biomedical Natural Language Processing
Yu Gu, Robert Tinn, Hao Cheng, Michael R. Lucas, Naoto Usuyama, Xiaodong Liu, Tristan Naumann, Jianfeng Gao, Hoifung Poon
31 Jul 2020

Language Models are Few-Shot Learners
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, ..., Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei
28 May 2020

Don't Stop Pretraining: Adapt Language Models to Domains and Tasks
Suchin Gururangan, Ana Marasović, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, Noah A. Smith
23 Apr 2020

Exploiting Cloze Questions for Few Shot Text Classification and Natural Language Inference
Timo Schick, Hinrich Schütze
21 Jan 2020

CopyMTL: Copy Mechanism for Joint Extraction of Entities and Relations with Multi-Task Learning
Daojian Zeng, Haoran Zhang, Qianying Liu
24 Nov 2019

Effective Modeling of Encoder-Decoder Architecture for Joint Entity and Relation Extraction
Tapas Nayak, Hwee Tou Ng
22 Nov 2019

BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension
M. Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdel-rahman Mohamed, Omer Levy, Veselin Stoyanov, Luke Zettlemoyer
29 Oct 2019

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
Colin Raffel, Noam M. Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu
23 Oct 2019

Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism
Mohammad Shoeybi, M. Patwary, Raul Puri, P. LeGresley, Jared Casper, Bryan Catanzaro
17 Sep 2019

Span-based Joint Entity and Relation Extraction with Transformer Pre-training
Markus Eberts, A. Ulges
17 Sep 2019

Transfer Learning in Biomedical Natural Language Processing: An Evaluation of BERT and ELMo on Ten Benchmarking Datasets
Yifan Peng, Shankai Yan, Zhiyong Lu
13 Jun 2019

Publicly Available Clinical BERT Embeddings
Emily Alsentzer, John R. Murphy, Willie Boag, W. Weng, Di Jin, Tristan Naumann, Matthew B. A. McDermott
06 Apr 2019

BioBERT: a pre-trained biomedical language representation model for biomedical text mining
Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, Jaewoo Kang
25 Jan 2019

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova
11 Oct 2018

Deep reinforcement learning from human preferences
Paul Christiano, Jan Leike, Tom B. Brown, Miljan Martic, Shane Legg, Dario Amodei
12 Jun 2017