ResearchTrend.AI
Can You Label Less by Using Out-of-Domain Data? Active & Transfer Learning with Few-shot Instructions
arXiv: 2211.11798

21 November 2022
Rafal Kocielnik
Sara Kangaslahti
Shrimai Prabhumoye
M. Hari
R. Alvarez
Anima Anandkumar

Papers citing "Can You Label Less by Using Out-of-Domain Data? Active & Transfer Learning with Few-shot Instructions"

6 / 6 papers shown:

1. Active Learning Principles for In-Context Learning with Large Language Models
   Katerina Margatina, Timo Schick, Nikolaos Aletras, Jane Dwivedi-Yu (23 May 2023)

2. Cold-Start Data Selection for Few-shot Language Model Fine-tuning: A Prompt-Based Uncertainty Propagation Approach
   Yue Yu, Rongzhi Zhang, Ran Xu, Jieyu Zhang, Jiaming Shen, Chao Zhang (15 Sep 2022)

3. Good Examples Make A Faster Learner: Simple Demonstration-based Learning for Low-resource NER
   Dong-Ho Lee, Akshen Kadakia, Kangmin Tan, Mahak Agarwal, Xinyu Feng, Takashi Shibuya, Ryosuke Mitani, Toshiyuki Sekiya, Jay Pujara, Xiang Ren (16 Oct 2021)

4. Self-Diagnosis and Self-Debiasing: A Proposal for Reducing Corpus-Based Bias in NLP
   Timo Schick, Sahana Udupa, Hinrich Schütze (28 Feb 2021)

5. Making Pre-trained Language Models Better Few-shot Learners
   Tianyu Gao, Adam Fisch, Danqi Chen (31 Dec 2020)

6. Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism
   M. Shoeybi, M. Patwary, Raul Puri, P. LeGresley, Jared Casper, Bryan Catanzaro (17 Sep 2019)