ResearchTrend.AI
Cited By: arXiv 2305.06408
Accelerating Batch Active Learning Using Continual Learning Techniques


10 May 2023
Arnav M. Das, Gantavya Bhatt, M. Bhalerao, Vianne R. Gao, Rui Yang, J. Bilmes
Topics: VLM, CLL

Papers citing "Accelerating Batch Active Learning Using Continual Learning Techniques"

6 / 6 papers shown
Diversified Batch Selection for Training Acceleration
Feng Hong, Yueming Lyu, Jiangchao Yao, Ya Zhang, Ivor W. Tsang, Yanfeng Wang
07 Jun 2024
Investigating the Quality of DermaMNIST and Fitzpatrick17k Dermatological Image Datasets
Kumar Abhishek, Aditi Jain, Ghassan Hamarneh
25 Jan 2024
An Experimental Design Framework for Label-Efficient Supervised Finetuning of Large Language Models
Gantavya Bhatt, Yifang Chen, Arnav M. Das, Jifan Zhang, Sang T. Truong, ..., Jeff Bilmes, S. Du, Kevin G. Jamieson, Jordan T. Ash, Robert D. Nowak
12 Jan 2024
LabelBench: A Comprehensive Framework for Benchmarking Adaptive Label-Efficient Learning
Jifan Zhang, Yifang Chen, Gregory H. Canal, Stephen Mussmann, Arnav M. Das, ..., Yinglun Zhu, Jeffrey Bilmes, S. Du, Kevin G. Jamieson, Robert D. Nowak
Topics: VLM
16 Jun 2023
Few-Shot Continual Active Learning by a Robot
Ali Ayub, C. Fendley
Topics: CLL
09 Oct 2022
Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism
M. Shoeybi, M. Patwary, Raul Puri, P. LeGresley, Jared Casper, Bryan Catanzaro
Topics: MoE
17 Sep 2019