ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

RETRIEVE: Coreset Selection for Efficient and Robust Semi-Supervised Learning
arXiv:2106.07760
14 June 2021
Krishnateja Killamsetty, Xujiang Zhao, F. Chen, Rishabh K. Iyer

Papers citing "RETRIEVE: Coreset Selection for Efficient and Robust Semi-Supervised Learning"

12 citing papers listed.
Leakage-Resilient and Carbon-Neutral Aggregation Featuring the Federated AI-enabled Critical Infrastructure
Zehang Deng, Ruoxi Sun, Minhui Xue, Sheng Wen, S. Çamtepe, Surya Nepal, Yang Xiang
24 May 2024

ATOM: Attention Mixer for Efficient Dataset Distillation
Samir Khaki, A. Sajedi, Kai Wang, Lucy Z. Liu, Y. Lawryshyn, Konstantinos N. Plataniotis
02 May 2024

Effective pruning of web-scale datasets based on complexity of concept clusters
Amro Abbas, E. Rusak, Kushal Tirumala, Wieland Brendel, Kamalika Chaudhuri, Ari S. Morcos
Tags: VLM, CLIP
09 Jan 2024

DEFT: Data Efficient Fine-Tuning for Pre-Trained Language Models via Unsupervised Core-Set Selection
Devleena Das, Vivek Khetan
25 Oct 2023

Training Ensembles with Inliers and Outliers for Semi-supervised Active Learning
Vladan Stojnić, Zakaria Laskar, Giorgos Tolias
07 Jul 2023

Repeated Random Sampling for Minimizing the Time-to-Accuracy of Learning
Patrik Okanovic, R. Waleffe, Vasilis Mageirakos, Konstantinos E. Nikolakakis, Amin Karbasi, Dionysis Kalogerias, Nezihe Merve Gürel, Theodoros Rekatsinas
Tags: DD
28 May 2023

DoReMi: Optimizing Data Mixtures Speeds Up Language Model Pretraining
Sang Michael Xie, Hieu H. Pham, Xuanyi Dong, Nan Du, Hanxiao Liu, Yifeng Lu, Percy Liang, Quoc V. Le, Tengyu Ma, Adams Wei Yu
Tags: MoMe, MoE
17 May 2023

A Unified Active Learning Framework for Annotating Graph Data with Application to Software Source Code Performance Prediction
P. Samoaa, Linus Aronsson, Antonio Longa, Philipp Leitner, M. Chehreghani
06 Apr 2023

Open-Set Likelihood Maximization for Few-Shot Learning
Malik Boudiaf, Etienne Bennequin, Myriam Tami, Antoine Toubhans, Pablo Piantanida, Céline Hudelot, Ismail Ben Ayed
Tags: BDL
20 Jan 2023

Dataset Distillation: A Comprehensive Review
Ruonan Yu, Songhua Liu, Xinchao Wang
Tags: DD
17 Jan 2023

Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
Chelsea Finn, Pieter Abbeel, Sergey Levine
Tags: OOD
09 Mar 2017

Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles
Balaji Lakshminarayanan, Alexander Pritzel, Charles Blundell
Tags: UQCV, BDL
05 Dec 2016