arXiv: 2209.07859
Extracting Biomedical Factual Knowledge Using Pretrained Language Model and Electronic Health Record Context
26 August 2022
Zonghai Yao, Yi Cao, Zhichao Yang, Vijeta Deshpande, Hong-ye Yu
Papers citing "Extracting Biomedical Factual Knowledge Using Pretrained Language Model and Electronic Health Record Context" (16 / 16 papers shown)
Can Language Models be Biomedical Knowledge Bases? · Mujeen Sung, Jinhyuk Lee, Sean S. Yi, Minji Jeon, Sungdong Kim, Jaewoo Kang · AI4MH · 145 / 107 / 0 · 15 Sep 2021
Improving Formality Style Transfer with Context-Aware Rule Injection · Zonghai Yao, Hong-ye Yu · 50 / 16 / 0 · 01 Jun 2021
Factual Probing Is [MASK]: Learning vs. Learning to Recall · Zexuan Zhong, Dan Friedman, Danqi Chen · 45 / 410 / 0 · 12 Apr 2021
Zero-shot Entity Linking with Efficient Long Range Sequence Modeling · Zonghai Yao, Liangliang Cao, Huapu Pan · VLM · 83 / 21 / 0 · 12 Oct 2020
How Context Affects Language Models' Factual Predictions · Fabio Petroni, Patrick Lewis, Aleksandra Piktus, Tim Rocktäschel, Yuxiang Wu, Alexander H. Miller, Sebastian Riedel · KELM · 47 / 239 / 0 · 10 May 2020
ProtoQA: A Question Answering Dataset for Prototypical Common-Sense Reasoning · Michael Boratko, Xiang Lorraine Li, Rajarshi Das, Timothy J. O'Gorman, Daniel Le, Andrew McCallum · 68 / 58 / 0 · 02 May 2020
Don't Stop Pretraining: Adapt Language Models to Domains and Tasks · Suchin Gururangan, Ana Marasović, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, Noah A. Smith · VLM, AI4CE, CLL · 134 / 2,414 / 0 · 23 Apr 2020
How Can We Know What Language Models Know? · Zhengbao Jiang, Frank F. Xu, Jun Araki, Graham Neubig · KELM · 123 / 1,402 / 0 · 28 Nov 2019
Language Models as Knowledge Bases? · Fabio Petroni, Tim Rocktäschel, Patrick Lewis, A. Bakhtin, Yuxiang Wu, Alexander H. Miller, Sebastian Riedel · KELM, AI4MH · 558 / 2,660 / 0 · 03 Sep 2019
Universal Adversarial Triggers for Attacking and Analyzing NLP · Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, Sameer Singh · AAML, SILM · 109 / 865 / 0 · 20 Aug 2019
RoBERTa: A Robustly Optimized BERT Pretraining Approach · Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, M. Lewis, Luke Zettlemoyer, Veselin Stoyanov · AIMat · 514 / 24,351 / 0 · 26 Jul 2019
BioBERT: a pre-trained biomedical language representation model for biomedical text mining · Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, Jaewoo Kang · OOD · 137 / 5,628 / 0 · 25 Jan 2019
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding · Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova · VLM, SSL, SSeg · 1.4K / 94,511 / 0 · 11 Oct 2018
Dissecting Contextual Word Embeddings: Architecture and Representation · Matthew E. Peters, Mark Neumann, Luke Zettlemoyer, Wen-tau Yih · 92 / 429 / 0 · 27 Aug 2018
Attention Is All You Need · Ashish Vaswani, Noam M. Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan Gomez, Lukasz Kaiser, Illia Polosukhin · 3DV · 624 / 130,942 / 0 · 12 Jun 2017
Reading Wikipedia to Answer Open-Domain Questions · Danqi Chen, Adam Fisch, Jason Weston, Antoine Bordes · RALM · 108 / 2,007 / 0 · 31 Mar 2017