Continuous Active Learning Using Pretrained Transformers

15 August 2022
Nima Sadri, G. Cormack

Papers citing "Continuous Active Learning Using Pretrained Transformers"

5 of 5 papers shown:
  • Annotating Data for Fine-Tuning a Neural Ranker? Current Active Learning Strategies are not Better than Random Selection
    Sophia Althammer, Guido Zuccon, Sebastian Hofstätter, Suzan Verberne, Allan Hanbury (12 Sep 2023)
  • Bio-SIEVE: Exploring Instruction Tuning Large Language Models for Systematic Review Automation
    Ambrose Robinson, William Thorne, Ben Wu, A. Pandor, M. Essat, Mark Stevenson, Xingyi Song (12 Aug 2023)
  • Goldilocks: Just-Right Tuning of BERT for Technology-Assisted Review
    Eugene Yang, Sean MacAvaney, D. Lewis, O. Frieder (03 May 2021)
  • The Expando-Mono-Duo Design Pattern for Text Ranking with Pretrained Sequence-to-Sequence Models
    Ronak Pradeep, Rodrigo Nogueira, Jimmy J. Lin (14 Jan 2021)
  • Pretrained Transformers for Text Ranking: BERT and Beyond
    Jimmy J. Lin, Rodrigo Nogueira, Andrew Yates (13 Oct 2020)