Remember the Past: Distilling Datasets into Addressable Memories for Neural Networks
Neural Information Processing Systems (NeurIPS), 2022
6 June 2022
Zhiwei Deng, Olga Russakovsky
FedML, DD
ArXiv (abs) | PDF | HTML | GitHub (39★)

Papers citing "Remember the Past: Distilling Datasets into Addressable Memories for Neural Networks"

38 papers
Computational Budget Should Be Considered in Data Selection
Weilin Wan, Weizhong Zhang, Cheng Jin
19 Oct 2025

Beyond Pixels: Efficient Dataset Distillation via Sparse Gaussian Representation
Chenyang Jiang, Zhengcen Li, Hang Zhao, Qiben Shan, Shaocong Wu, Jingyong Su
DD
30 Sep 2025

AdapSNE: Adaptive Fireworks-Optimized and Entropy-Guided Dataset Sampling for Edge DNN Training
Boran Zhao, Hetian Liu, Zihang Yuan, Li Zhu, Fan Yang, Lina Xie, Tian Xia, Wenzhe Zhao, Pengju Ren
AAML
19 Aug 2025

NMS: Efficient Edge DNN Training via Near-Memory Sampling on Manifolds
Boran Zhao, Haiduo Huang, Qiwei Dang, Wenzhe Zhao, Tian Xia, Pengju Ren
04 Aug 2025

Enhancing Diffusion-based Dataset Distillation via Adversary-Guided Curriculum Sampling
Lexiao Zou, Gongwei Chen, Yanda Chen, Miao Zhang
DD
02 Aug 2025

Boost Self-Supervised Dataset Distillation via Parameterization, Predefined Augmentation, and Approximation
International Conference on Learning Representations (ICLR), 2025
Sheng-Feng Yu, Jia-Jiun Yao, Wei-Chen Chiu
DD
29 Jul 2025

Dataset Distillation as Data Compression: A Rate-Utility Perspective
Youneng Bao, Yiping Liu, Zhuo Chen, Yongsheng Liang, Mu Li, Kede Ma
DD
23 Jul 2025

DD-Ranking: Rethinking the Evaluation of Dataset Distillation
Zekai Li, Xinhao Zhong, Samir Khaki, Zhiyuan Liang, Yuhao Zhou, ..., Konstantinos N Plataniotis, Zhangyang Wang, Bo Zhao, Yang You, Kai Wang
DD
19 May 2025

A Large-Scale Study on Video Action Dataset Condensation
Yang Chen, Sheng Guo, Bo Zheng, Limin Wang
DD
13 Mar 2025

Distilling Dataset into Neural Field
International Conference on Learning Representations (ICLR), 2025
DongHyeok Shin, Heesun Bae, Gyuwon Sim, Wanmo Kang, Il-Chul Moon
DD
05 Mar 2025

Dataset Distillation via Committee Voting
Jiacheng Cui, Zhaoyi Li, Xiaochen Ma, Xinyue Bi, Yaxin Luo, Zhiqiang Shen
DD, FedML
13 Jan 2025

FairDD: Fair Dataset Distillation
Qihang Zhou, Shenhao Fang, Shibo He, Wenchao Meng, Jiming Chen
FedML, DD
29 Nov 2024

Dataset Distillation-based Hybrid Federated Learning on Non-IID Data
Xiufang Shi, Wei Zhang, Mincheng Wu, Guangyi Liu, Z. Wen, Shibo He, Tejal Shah, R. Ranjan
DD, FedML
26 Sep 2024

FYI: Flip Your Images for Dataset Distillation
Byunggwan Son, Youngmin Oh, Donghyeon Baek, Bumsub Ham
DD
11 Jul 2024

Behaviour Distillation
Andrei Lupu, Chris Xiaoxuan Lu, Jarek Liesen, R. T. Lange, Jakob Foerster
DD
21 Jun 2024

A Label is Worth a Thousand Images in Dataset Distillation
Neural Information Processing Systems (NeurIPS), 2024
Tian Qin, Zhiwei Deng, David Alvarez-Melis
DD
15 Jun 2024

GIFT: Unlocking Full Potential of Labels in Distilled Dataset at Near-zero Cost
International Conference on Learning Representations (ICLR), 2024
Xinyi Shang, Peng Sun, Tao Lin
23 May 2024

Graph Data Condensation via Self-expressive Graph Structure Reconstruction
Knowledge Discovery and Data Mining (KDD), 2024
Zhanyu Liu, Chaolv Zeng, Guanjie Zheng
DD
12 Mar 2024

Distributional Dataset Distillation with Subtask Decomposition
Tian Qin, Zhiwei Deng, David Alvarez-Melis
DD
01 Mar 2024

Disentangled Condensation for Large-scale Graphs
Zhenbang Xiao, Shunyu Liu, Yu Wang, Tongya Zheng, Weilong Dai, Mingli Song
DD
18 Jan 2024

MIM4DD: Mutual Information Maximization for Dataset Distillation
Yuzhang Shang, Zhihang Yuan, Yan Yan
DD
27 Dec 2023

Dancing with Still Images: Video Distillation via Static-Dynamic Disentanglement
Computer Vision and Pattern Recognition (CVPR), 2023
Ziyu Wang, Yue Xu, Cewu Lu, Yong-Lu Li
DD
01 Dec 2023

Dataset Distillation via the Wasserstein Metric
Haoyang Liu, Yijiang Li, Tiancheng Xing, Peiran Wang, Vibhu Dalal, Luwei Li, Jingrui He, Haohan Wang
DD
30 Nov 2023

Efficient Dataset Distillation via Minimax Diffusion
Computer Vision and Pattern Recognition (CVPR), 2023
Jianyang Gu, Saeed Vahidian, Vyacheslav Kungurtsev, Haonan Wang, Wei Jiang, Yang You, Yiran Chen
DD
27 Nov 2023

Frequency Domain-based Dataset Distillation
Neural Information Processing Systems (NeurIPS), 2023
DongHyeok Shin, Seungjae Shin, Il-Chul Moon
DD
15 Nov 2023

Data Optimization in Deep Learning: A Survey
IEEE Transactions on Knowledge and Data Engineering (TKDE), 2023
Ou Wu, Rujing Yao
25 Oct 2023

Leveraging Hierarchical Feature Sharing for Efficient Dataset Condensation
European Conference on Computer Vision (ECCV), 2023
Haizhong Zheng, Jiachen Sun, Shutong Wu, B. Kailkhura, Zhuoqing Mao, Chaowei Xiao, Atul Prakash
DD
11 Oct 2023

Vision-Language Dataset Distillation
Xindi Wu, Byron Zhang, Zhiwei Deng, Olga Russakovsky
DD, VLM
15 Aug 2023

Rethinking Data Distillation: Do Not Overlook Calibration
IEEE International Conference on Computer Vision (ICCV), 2023
Dongyao Zhu, Bowen Lei, Jie M. Zhang, Yanbo Fang, Ruqi Zhang, Yiqun Xie, Dongkuan Xu
DD, FedML
24 Jul 2023

The Importance of Robust Features in Mitigating Catastrophic Forgetting
International Symposium on Computers and Communications (ISCC), 2023
Hikmat Khan, N. Bouaynaya, Ghulam Rasool
29 Jun 2023

Squeeze, Recover and Relabel: Dataset Condensation at ImageNet Scale From A New Perspective
Neural Information Processing Systems (NeurIPS), 2023
Zeyuan Yin, Eric P. Xing, Zhiqiang Shen
DD
22 Jun 2023

Repeated Random Sampling for Minimizing the Time-to-Accuracy of Learning
International Conference on Learning Representations (ICLR), 2023
Patrik Okanovic, R. Waleffe, Vasilis Mageirakos, Konstantinos E. Nikolakakis, Amin Karbasi, Dionysis Kalogerias, Nezihe Merve Gürel, Theodoros Rekatsinas
DD
28 May 2023

Distill Gold from Massive Ores: Efficient Dataset Distillation via Critical Samples Selection
Yue Xu, Yong-Lu Li, Kaitong Cui, Ziyu Wang, Cewu Lu, Yu-Wing Tai, Chi-Keung Tang
DD
28 May 2023

DiM: Distilling Dataset into Generative Model
Kai Wang, Jianyang Gu, Daquan Zhou, Zheng Hua Zhu, Wei Jiang, Yang You
DD
08 Mar 2023

Dataset Distillation: A Comprehensive Review
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2023
Ruonan Yu, Songhua Liu, Xinchao Wang
DD
17 Jan 2023

A Comprehensive Survey of Dataset Distillation
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2023
Shiye Lei, Dacheng Tao
DD
13 Jan 2023

Data Distillation: A Survey
Noveen Sachdeva, Julian McAuley
DD
11 Jan 2023

Scaling Up Dataset Distillation to ImageNet-1K with Constant Memory
International Conference on Machine Learning (ICML), 2022
Justin Cui, Ruochen Wang, Si Si, Cho-Jui Hsieh
DD
19 Nov 2022