Continuous Active Learning Using Pretrained Transformers
Nima Sadri, G. Cormack
15 August 2022 · arXiv:2208.06955
Papers citing "Continuous Active Learning Using Pretrained Transformers" (5 of 5 shown):
Annotating Data for Fine-Tuning a Neural Ranker? Current Active Learning Strategies are not Better than Random Selection
Sophia Althammer, Guido Zuccon, Sebastian Hofstätter, Suzan Verberne, Allan Hanbury
12 Sep 2023
Bio-SIEVE: Exploring Instruction Tuning Large Language Models for Systematic Review Automation
Ambrose Robinson, William Thorne, Ben Wu, A. Pandor, M. Essat, Mark Stevenson, Xingyi Song
12 Aug 2023
Goldilocks: Just-Right Tuning of BERT for Technology-Assisted Review
Eugene Yang, Sean MacAvaney, D. Lewis, O. Frieder
03 May 2021
The Expando-Mono-Duo Design Pattern for Text Ranking with Pretrained Sequence-to-Sequence Models
Ronak Pradeep, Rodrigo Nogueira, Jimmy J. Lin
14 Jan 2021
Pretrained Transformers for Text Ranking: BERT and Beyond
Jimmy J. Lin, Rodrigo Nogueira, Andrew Yates
13 Oct 2020