Position-based Prompting for Health Outcome Generation

Micheal Abaho, Danushka Bollegala, P. Williamson, S. Dodd
30 March 2022 · arXiv:2204.03489

Papers citing "Position-based Prompting for Health Outcome Generation"

26 papers shown:
  1. Assessment of contextualised representations in detecting outcome phrases in clinical trials. Micheal Abaho, Danushka Bollegala, P. Williamson, S. Dodd. 13 Feb 2022.
  2. Can Language Models be Biomedical Knowledge Bases? Mujeen Sung, Jinhyuk Lee, Sean S. Yi, Minji Jeon, Sungdong Kim, Jaewoo Kang. 15 Sep 2021.
  3. Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing. Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, Graham Neubig. 28 Jul 2021.
  4. Template-Based Named Entity Recognition Using BART. Leyang Cui, Yu Wu, Jian Liu, Sen Yang, Yue Zhang. 03 Jun 2021.
  5. Detect and Classify -- Joint Span Detection and Classification for Health Outcomes. Michael Abaho, Danushka Bollegala, P. Williamson, S. Dodd. 15 Apr 2021.
  6. Learning How to Ask: Querying LMs with Mixtures of Soft Prompts. Guanghui Qin, J. Eisner. 14 Apr 2021.
  7. Prefix-Tuning: Optimizing Continuous Prompts for Generation. Xiang Lisa Li, Percy Liang. 01 Jan 2021.
  8. Making Pre-trained Language Models Better Few-shot Learners. Tianyu Gao, Adam Fisch, Danqi Chen. 31 Dec 2020.
  9. UmlsBERT: Clinical Domain Knowledge Augmentation of Contextual Embeddings Using the Unified Medical Language System Metathesaurus. George Michalopoulos, Yuanxin Wang, H. Kaka, Helen H. Chen, Alexander Wong. 20 Oct 2020.
  10. X-FACTR: Multilingual Factual Knowledge Retrieval from Pretrained Language Models. Zhengbao Jiang, Antonios Anastasopoulos, Jun Araki, Haibo Ding, Graham Neubig. 13 Oct 2020.
  11. It's Not Just Size That Matters: Small Language Models Are Also Few-Shot Learners. Timo Schick, Hinrich Schütze. 15 Sep 2020.
  12. Language Models as Knowledge Bases: On Entity Representations, Storage Capacity, and Paraphrased Queries. Benjamin Heinzerling, Kentaro Inui. 20 Aug 2020.
  13. Language Models are Few-Shot Learners. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, ..., Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei. 28 May 2020.
  14. Don't Stop Pretraining: Adapt Language Models to Domains and Tasks. Suchin Gururangan, Ana Marasović, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, Noah A. Smith. 23 Apr 2020.
  15. Exploiting Cloze Questions for Few Shot Text Classification and Natural Language Inference. Timo Schick, Hinrich Schütze. 21 Jan 2020.
  16. How Can We Know What Language Models Know? Zhengbao Jiang, Frank F. Xu, Jun Araki, Graham Neubig. 28 Nov 2019.
  17. BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension. M. Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdel-rahman Mohamed, Omer Levy, Veselin Stoyanov, Luke Zettlemoyer. 29 Oct 2019.
  18. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. Colin Raffel, Noam M. Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. 23 Oct 2019.
  19. Language Models as Knowledge Bases? Fabio Petroni, Tim Rocktäschel, Patrick Lewis, A. Bakhtin, Yuxiang Wu, Alexander H. Miller, Sebastian Riedel. 03 Sep 2019.
  20. Commonsense Knowledge Mining from Pretrained Models. Joshua Feldman, Joe Davison, Alexander M. Rush. 02 Sep 2019.
  21. Position-Aware Self-Attention based Neural Sequence Labeling. Wei Wei, Zanbo Wang, Xian-Ling Mao, Guangyou Zhou, Pan Zhou, Sheng Jiang. 24 Aug 2019.
  22. SciBERT: A Pretrained Language Model for Scientific Text. Iz Beltagy, Kyle Lo, Arman Cohan. 26 Mar 2019.
  23. An Embarrassingly Simple Approach for Transfer Learning from Pretrained Language Models. Alexandra Chronopoulou, Christos Baziotis, Alexandros Potamianos. 27 Feb 2019.
  24. BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, Jaewoo Kang. 25 Jan 2019.
  25. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova. 11 Oct 2018.
  26. A Corpus with Multi-Level Annotations of Patients, Interventions and Outcomes to Support Language Processing for Medical Literature. Benjamin E. Nye, Junyi Jessy Li, Roma Patel, Yinfei Yang, Iain J. Marshall, A. Nenkova, Byron C. Wallace. 11 Jun 2018.