Dataset Distillation by Matching Training Trajectories
arXiv 2203.11932 · 22 March 2022
George Cazenavette, Tongzhou Wang, Antonio Torralba, Alexei A. Efros, Jun-Yan Zhu
FedML · DD
Papers citing "Dataset Distillation by Matching Training Trajectories" (50 of 259 shown)
Generalized Large-Scale Data Condensation via Various Backbone and Statistical Matching
Shitong Shao, Zeyuan Yin, Muxin Zhou, Xindong Zhang, Zhiqiang Shen · DD · 29 Nov 2023

QuickDrop: Efficient Federated Unlearning by Integrated Dataset Distillation
Akash Dhasade, Yaohong Ding, Song Guo, Anne-Marie Kermarrec, M. Vos, Leijie Wu · MU, DD · 27 Nov 2023

Dataset Distillation in Latent Space
Yuxuan Duan, Jianfu Zhang, Liqing Zhang · DD · 27 Nov 2023

Efficient Dataset Distillation via Minimax Diffusion
Jianyang Gu, Saeed Vahidian, Vyacheslav Kungurtsev, Haonan Wang, Wei Jiang, Yang You, Yiran Chen · DD · 27 Nov 2023

Spanning Training Progress: Temporal Dual-Depth Scoring (TDDS) for Enhanced Dataset Pruning
Xin Zhang, Jiawei Du, Yunsong Li, Weiying Xie, Qiufeng Wang · 22 Nov 2023

Frequency Domain-based Dataset Distillation
DongHyeok Shin, Seungjae Shin, Il-Chul Moon · DD · 15 Nov 2023

Embarrassingly Simple Dataset Distillation
Yunzhen Feng, Ramakrishna Vedantam, Julia Kempe · DD · 13 Nov 2023

Sequential Subset Matching for Dataset Distillation
Jiawei Du, Qin Shi, Qiufeng Wang · DD · 02 Nov 2023

Distil the informative essence of loop detector data set: Is network-level traffic forecasting hungry for more data?
Guopeng Li, V. Knoop, J. W. C. van Lint · 31 Oct 2023

One-for-All: Bridge the Gap Between Heterogeneous Architectures in Knowledge Distillation
Zhiwei Hao, Jianyuan Guo, Kai Han, Yehui Tang, Han Hu, Yunhe Wang, Chang Xu · 30 Oct 2023

Function Space Bayesian Pseudocoreset for Bayesian Neural Networks
Balhae Kim, Hyungi Lee, Juho Lee · BDL · 27 Oct 2023

Data Optimization in Deep Learning: A Survey
Ou Wu, Rujing Yao · 25 Oct 2023

Data Pruning via Moving-one-Sample-out
Haoru Tan, Sitong Wu, Fei Du, Yukang Chen, Zhibin Wang, Fan Wang, Xiaojuan Qi · 23 Oct 2023

You Only Condense Once: Two Rules for Pruning Condensed Datasets
Yang He, Lingao Xiao, Qiufeng Wang · 21 Oct 2023

Fast Graph Condensation with Structure-based Neural Tangent Kernel
Lin Wang, Wenqi Fan, Jiatong Li, Yao Ma, Qing Li · DD · 17 Oct 2023

Heterogenous Memory Augmented Neural Networks
Zihan Qiu, Zhen Liu, Shuicheng Yan, Shanghang Zhang, Jie Fu · 17 Oct 2023

AST: Effective Dataset Distillation through Alignment with Smooth and High-Quality Expert Trajectories
Jiyuan Shen, Wenzhuo Yang, Kwok-Yan Lam · DD · 16 Oct 2023

Real-Fake: Effective Training Data Synthesis Through Distribution Matching
Jianhao Yuan, Jie Zhang, Shuyang Sun, Philip H. S. Torr, Bo-Lu Zhao · 16 Oct 2023

Farzi Data: Autoregressive Data Distillation
Noveen Sachdeva, Zexue He, Wang-Cheng Kang, Jianmo Ni, D. Cheng, Julian McAuley · DD · 15 Oct 2023

Graph Distillation with Eigenbasis Matching
Yang Liu, Deyu Bo, Chuan Shi · DD · 13 Oct 2023

Does Graph Distillation See Like Vision Dataset Counterpart?
Beining Yang, Kai Wang, Qingyun Sun, Cheng Ji, Xingcheng Fu, Hao Tang, Yang You, Jianxin Li · DD · 13 Oct 2023

D2 Pruning: Message Passing for Balancing Diversity and Difficulty in Data Pruning
A. Maharana, Prateek Yadav, Mohit Bansal · 11 Oct 2023

Leveraging Hierarchical Feature Sharing for Efficient Dataset Condensation
Haizhong Zheng, Jiachen Sun, Shutong Wu, B. Kailkhura, Zhuoqing Mao, Chaowei Xiao, Atul Prakash · DD · 11 Oct 2023

Data Distillation Can Be Like Vodka: Distilling More Times For Better Quality
Xuxi Chen, Yu Yang, Zhangyang Wang, Baharan Mirzasoleiman · DD · 10 Oct 2023

Self-Supervised Dataset Distillation for Transfer Learning
Dong Bok Lee, Seanie Lee, Joonho Ko, Kenji Kawaguchi, Juho Lee, Sung Ju Hwang · DD · 10 Oct 2023

Towards Lossless Dataset Distillation via Difficulty-Aligned Trajectory Matching
Ziyao Guo, Kai Wang, George Cazenavette, Hui Li, Kaipeng Zhang, Yang You · DD · 09 Oct 2023

Can pre-trained models assist in dataset distillation?
Yao Lu, Xuguang Chen, Yuchen Zhang, Jianyang Gu, Tianle Zhang, Yifan Zhang, Xiaoniu Yang, Qi Xuan, Kai Wang, Yang You · DD · 05 Oct 2023

DataDAM: Efficient Dataset Distillation with Attention Matching
A. Sajedi, Samir Khaki, Ehsan Amjadian, Lucy Z. Liu, Y. Lawryshyn, Konstantinos N. Plataniotis · DD · 29 Sep 2023

Dataset Condensation via Generative Model
David Junhao Zhang, Heng Wang, Chuhui Xue, Rui Yan, Wenqing Zhang, Song Bai, Mike Zheng Shou · DD · 14 Sep 2023

Prototype-based Dataset Comparison
Nanne van Noord · 05 Sep 2023

D4: Improving LLM Pretraining via Document De-Duplication and Diversification
Kushal Tirumala, Daniel Simig, Armen Aghajanyan, Ari S. Morcos · SyDa · 23 Aug 2023

Dataset Quantization
Daquan Zhou, Kaixin Wang, Jianyang Gu, Xiang Peng, Dongze Lian, Yifan Zhang, Yang You, Jiashi Feng · DD · 21 Aug 2023

Vision-Language Dataset Distillation
Xindi Wu, Byron Zhang, Zhiwei Deng, Olga Russakovsky · DD, VLM · 15 Aug 2023

Exploring Multilingual Text Data Distillation
Shivam Sahni, Harsh Patel · DD · 09 Aug 2023

An Introduction to Bi-level Optimization: Foundations and Applications in Signal Processing and Machine Learning
Yihua Zhang, Prashant Khanduri, Ioannis C. Tsaknakis, Yuguang Yao, Min-Fong Hong, Sijia Liu · AI4CE · 01 Aug 2023

Graph Condensation for Inductive Node Representation Learning
Xin Gao, Tong Chen, Yilong Zang, Wentao Zhang, Quoc Viet Hung Nguyen, Kai Zheng, Hongzhi Yin · DD, AI4CE · 29 Jul 2023

Rethinking Data Distillation: Do Not Overlook Calibration
Dongyao Zhu, Bowen Lei, Jie M. Zhang, Yanbo Fang, Ruqi Zhang, Yiqun Xie, Dongkuan Xu · DD, FedML · 24 Jul 2023

Improved Distribution Matching for Dataset Condensation
Ganlong Zhao, Guanbin Li, Yipeng Qin, Yizhou Yu · DD · 19 Jul 2023

Towards Trustworthy Dataset Distillation
Shijie Ma, Fei Zhu, Zhen Cheng, Xu-Yao Zhang · DD · 18 Jul 2023

Image Captions are Natural Prompts for Text-to-Image Models
Shiye Lei, Hao Chen, Senyang Zhang, Bo-Lu Zhao, Dacheng Tao · VLM · 17 Jul 2023

A Comprehensive Survey of Forgetting in Deep Learning Beyond Continual Learning
Zhenyi Wang, Enneng Yang, Li Shen, Heng-Chiao Huang · KELM, MU · 16 Jul 2023

Dataset Distillation Meets Provable Subset Selection
M. Tukan, Alaa Maalouf, Margarita Osadchy · DD · 16 Jul 2023

Distilled Pruning: Using Synthetic Data to Win the Lottery
Luke McDermott, Daniel Cummings · SyDa, DD · 07 Jul 2023

Federated Generative Learning with Foundation Models
Jie Zhang, Xiaohua Qi, Bo-Lu Zhao · FedML · 28 Jun 2023

Squeeze, Recover and Relabel: Dataset Condensation at ImageNet Scale From A New Perspective
Zeyuan Yin, Eric P. Xing, Zhiqiang Shen · DD · 22 Jun 2023

Categories of Response-Based, Feature-Based, and Relation-Based Knowledge Distillation
Chuanguang Yang, Xinqiang Yu, Zhulin An, Yongjun Xu · VLM, OffRL · 19 Jun 2023

Globally Interpretable Graph Learning via Distribution Matching
Yi Nian, Yurui Chang, Wei Jin, Lu Lin · OOD · 18 Jun 2023

Large-scale Dataset Pruning with Dynamic Uncertainty
Muyang He, Shuo Yang, Tiejun Huang, Bo-Lu Zhao · 08 Jun 2023

Increasing Diversity While Maintaining Accuracy: Text Data Generation with Large Language Models and Human Interventions
John Joon Young Chung, Ece Kamar, Saleema Amershi · ALM · 07 Jun 2023

Structure-free Graph Condensation: From Large-scale Graphs to Condensed Graph-free Data
Xin Zheng, Miao Zhang, C. Chen, Quoc Viet Hung Nguyen, Xingquan Zhu, Shirui Pan · DD · 05 Jun 2023