arXiv:2105.01044
Goldilocks: Just-Right Tuning of BERT for Technology-Assisted Review
Eugene Yang, Sean MacAvaney, D. Lewis, O. Frieder
3 May 2021
Papers citing "Goldilocks: Just-Right Tuning of BERT for Technology-Assisted Review" (24 papers)
- Certifying One-Phase Technology-Assisted Reviews. D. Lewis, Eugene Yang, O. Frieder. 29 Aug 2021.
- Heuristic Stopping Rules For Technology-Assisted Review. Eugene Yang, D. Lewis, O. Frieder. 18 Jun 2021.
- On Minimizing Cost in Legal Document Review Workflows. Eugene Yang, D. Lewis, O. Frieder. 18 Jun 2021.
- An Analysis of a BERT Deep Learning Strategy on a Technology Assisted Review Task. Alexandros Ioannidis. 16 Apr 2021.
- Denmark's Participation in the Search Engine TREC COVID-19 Challenge: Lessons Learned about Searching for Precise Biomedical Scientific Information on COVID-19. Lucas Chaves Lima, Casper Hansen, Christian B. Hansen, Dongsheng Wang, Maria Maistro, Birger Larsen, J. Simonsen, Christina Lioma. 25 Nov 2020.
- Domain-Specific Language Model Pretraining for Biomedical Natural Language Processing. Yu Gu, Robert Tinn, Hao Cheng, Michael R. Lucas, Naoto Usuyama, Xiaodong Liu, Tristan Naumann, Jianfeng Gao, Hoifung Poon. 31 Jul 2020.
- Few-shot Slot Tagging with Collapsed Dependency Transfer and Label-enhanced Task-adaptive Projection Network. Yutai Hou, Wanxiang Che, Y. Lai, Zhihan Zhou, Yijia Liu, Han Liu, Ting Liu. 10 Jun 2020.
- Language Models are Few-Shot Learners. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, ..., Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei. 28 May 2020.
- Learning with Weak Supervision for Email Intent Detection. Kai Shu, Subhabrata Mukherjee, Guoqing Zheng, Ahmed Hassan Awadallah, Milad Shokouhi, S. Dumais. 26 May 2020.
- Don't Stop Pretraining: Adapt Language Models to Domains and Tasks. Suchin Gururangan, Ana Marasović, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, Noah A. Smith. 23 Apr 2020.
- Context-Transformer: Tackling Object Confusion for Few-Shot Detection. Ze Yang, Yali Wang, Xianyu Chen, Jianzhuang Liu, Yu Qiao. 16 Mar 2020.
- Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. Colin Raffel, Noam M. Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. 23 Oct 2019.
- Cross-Lingual BERT Transformation for Zero-Shot Dependency Parsing. Yuxuan Wang, Wanxiang Che, Jiang Guo, Yijia Liu, Ting Liu. 15 Sep 2019.
- RoBERTa: A Robustly Optimized BERT Pretraining Approach. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, M. Lewis, Luke Zettlemoyer, Veselin Stoyanov. 26 Jul 2019.
- Discriminative Active Learning. Daniel Gissin, Shai Shalev-Shwartz. 15 Jul 2019.
- XLNet: Generalized Autoregressive Pretraining for Language Understanding. Zhilin Yang, Zihang Dai, Yiming Yang, J. Carbonell, Ruslan Salakhutdinov, Quoc V. Le. 19 Jun 2019.
- Variational Pretraining for Semi-supervised Text Classification. Suchin Gururangan, T. Dang, Dallas Card, Noah A. Smith. 05 Jun 2019.
- DocBERT: BERT for Document Classification. Ashutosh Adhikari, Achyudh Ram, Raphael Tang, Jimmy J. Lin. 17 Apr 2019.
- CEDR: Contextualized Embeddings for Document Ranking. Sean MacAvaney, Andrew Yates, Arman Cohan, Nazli Goharian. 15 Apr 2019.
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova. 11 Oct 2018.
- libact: Pool-based Active Learning in Python. Yao-Yuan Yang, Shao-Chuan Lee, Yu-An Chung, Tung-En Wu, Si-An Chen, Hsuan-Tien Lin. 01 Oct 2017.
- Attention Is All You Need. Ashish Vaswani, Noam M. Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan Gomez, Lukasz Kaiser, Illia Polosukhin. 12 Jun 2017.
- Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books. Yukun Zhu, Ryan Kiros, R. Zemel, Ruslan Salakhutdinov, R. Urtasun, Antonio Torralba, Sanja Fidler. 22 Jun 2015.
- Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning. Y. Gal, Zoubin Ghahramani. 06 Jun 2015.