ResearchTrend.AI
Flexible Dataset Distillation: Learn Labels Instead of Images
arXiv: 2006.08572 · 15 June 2020 · DD
Ondrej Bohdal, Yongxin Yang, Timothy M. Hospedales

Papers citing "Flexible Dataset Distillation: Learn Labels Instead of Images" (33 papers)
Transferable text data distillation by trajectory matching
Rong Yao, Hailin Hu, Yifei Fu, Hanting Chen, Wenyi Fang, Fanyi Du, Kai Han, Yunhe Wang · 14 Apr 2025

Teddy: Efficient Large-Scale Dataset Distillation via Taylor-Approximated Matching
Ruonan Yu, Songhua Liu, Jingwen Ye, Xinchao Wang · 10 Oct 2024 · DD

A Label is Worth a Thousand Images in Dataset Distillation
Tian Qin, Zhiwei Deng, David Alvarez-Melis · 15 Jun 2024 · DD

GIFT: Unlocking Full Potential of Labels in Distilled Dataset at Near-zero Cost
Xinyi Shang, Peng Sun, Tao Lin · 23 May 2024

ATOM: Attention Mixer for Efficient Dataset Distillation
Samir Khaki, A. Sajedi, Kai Wang, Lucy Z. Liu, Y. Lawryshyn, Konstantinos N. Plataniotis · 02 May 2024

DiLM: Distilling Dataset into Language Model for Text-level Dataset Distillation
Aru Maekawa, Satoshi Kosugi, Kotaro Funakoshi, Manabu Okumura · 30 Mar 2024 · DD
DD-RobustBench: An Adversarial Robustness Benchmark for Dataset Distillation
Yifan Wu, Jiawei Du, Ping Liu, Yuewei Lin, Wenqing Cheng, Wei-ping Xu · 20 Mar 2024 · DD, AAML

AST: Effective Dataset Distillation through Alignment with Smooth and High-Quality Expert Trajectories
Jiyuan Shen, Wenzhuo Yang, Kwok-Yan Lam · 16 Oct 2023 · DD

DataDAM: Efficient Dataset Distillation with Attention Matching
A. Sajedi, Samir Khaki, Ehsan Amjadian, Lucy Z. Liu, Y. Lawryshyn, Konstantinos N. Plataniotis · 29 Sep 2023 · DD

Towards Efficient Deep Hashing Retrieval: Condensing Your Data via Feature-Embedding Matching
Tao Feng, Jie Zhang, Peizheng Wang, Zhijie Wang, Shengyuan Pang · 29 May 2023 · DD

Repeated Random Sampling for Minimizing the Time-to-Accuracy of Learning
Patrik Okanovic, R. Waleffe, Vasilis Mageirakos, Konstantinos E. Nikolakakis, Amin Karbasi, Dionysis Kalogerias, Nezihe Merve Gürel, Theodoros Rekatsinas · 28 May 2023 · DD

Provable Data Subset Selection For Efficient Neural Network Training
M. Tukan, Samson Zhou, Alaa Maalouf, Daniela Rus, Vladimir Braverman, Dan Feldman · 09 Mar 2023 · MLT
Dataset Distillation: A Comprehensive Review
Ruonan Yu, Songhua Liu, Xinchao Wang · 17 Jan 2023 · DD

A Comprehensive Survey of Dataset Distillation
Shiye Lei, Dacheng Tao · 13 Jan 2023 · DD

Data Distillation: A Survey
Noveen Sachdeva, Julian McAuley · 11 Jan 2023 · DD

Accelerating Dataset Distillation via Model Augmentation
Lei Zhang, Jie M. Zhang, Bowen Lei, Subhabrata Mukherjee, Xiang Pan, Bo Zhao, Caiwen Ding, Heng Chang, Dongkuan Xu · 12 Dec 2022 · DD

Towards Robust Dataset Learning
Yihan Wu, Xinda Li, Florian Kerschbaum, Heng Huang, Hongyang R. Zhang · 19 Nov 2022 · DD, OOD

Scaling Up Dataset Distillation to ImageNet-1K with Constant Memory
Justin Cui, Ruochen Wang, Si Si, Cho-Jui Hsieh · 19 Nov 2022 · DD

Dataset Distillation via Factorization
Songhua Liu, Kai Wang, Xingyi Yang, Jingwen Ye, Xinchao Wang · 30 Oct 2022 · DD
Dataset Distillation using Neural Feature Regression
Yongchao Zhou, E. Nezhadarya, Jimmy Ba · 01 Jun 2022 · DD, FedML

Privacy for Free: How does Dataset Condensation Help Privacy?
Tian Dong, Bo Zhao, Lingjuan Lyu · 01 Jun 2022 · DD

Synthesizing Informative Training Samples with GAN
Bo Zhao, Hakan Bilen · 15 Apr 2022 · DD

Dataset Distillation by Matching Training Trajectories
George Cazenavette, Tongzhou Wang, Antonio Torralba, Alexei A. Efros, Jun-Yan Zhu · 22 Mar 2022 · FedML, DD

The Augmented Image Prior: Distilling 1000 Classes by Extrapolating from a Single Image
Yuki M. Asano, Aaqib Saeed · 01 Dec 2021

Graph Condensation for Graph Neural Networks
Wei Jin, Lingxiao Zhao, Shichang Zhang, Yozen Liu, Jiliang Tang, Neil Shah · 14 Oct 2021 · DD, AI4CE

Dataset Distillation with Infinitely Wide Convolutional Networks
Timothy Nguyen, Roman Novak, Lechao Xiao, Jaehoon Lee · 27 Jul 2021 · DD
Meta-Calibration: Learning of Model Calibration Using Differentiable Expected Calibration Error
Ondrej Bohdal, Yongxin Yang, Timothy M. Hospedales · 17 Jun 2021 · UQCV, OOD

Data Distillation for Text Classification
Yongqi Li, Wenjie Li · 17 Apr 2021 · DD
Dataset Meta-Learning from Kernel Ridge-Regression
Timothy Nguyen, Zhourong Chen, Jaehoon Lee · 30 Oct 2020 · DD
Dataset Condensation with Gradient Matching
Bo Zhao, Konda Reddy Mopuri, Hakan Bilen · 10 Jun 2020 · DD

Knowledge Distillation: A Survey
Jianping Gou, B. Yu, Stephen J. Maybank, Dacheng Tao · 09 Jun 2020 · VLM

Meta-Learning in Neural Networks: A Survey
Timothy M. Hospedales, Antreas Antoniou, P. Micaelli, Amos Storkey · 11 Apr 2020 · OOD

Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
Chelsea Finn, Pieter Abbeel, Sergey Levine · 09 Mar 2017 · OOD