
Remember the Past: Distilling Datasets into Addressable Memories for Neural Networks

Neural Information Processing Systems (NeurIPS), 2022

Papers citing "Remember the Past: Distilling Datasets into Addressable Memories for Neural Networks"

Distilling Dataset into Neural Field
  International Conference on Learning Representations (ICLR), 2025 · 05 Mar 2025

A Label is Worth a Thousand Images in Dataset Distillation
  Neural Information Processing Systems (NeurIPS), 2024 · 15 Jun 2024

GIFT: Unlocking Full Potential of Labels in Distilled Dataset at Near-zero Cost
  International Conference on Learning Representations (ICLR), 2024 · 23 May 2024

Efficient Dataset Distillation via Minimax Diffusion
  Jianyang Gu, Saeed Vahidian, Vyacheslav Kungurtsev, Haonan Wang, Wei Jiang, Yang You, Yiran Chen
  Computer Vision and Pattern Recognition (CVPR), 2024 · 27 Nov 2023

Frequency Domain-based Dataset Distillation
  Neural Information Processing Systems (NeurIPS), 2023 · 15 Nov 2023

Data Optimization in Deep Learning: A Survey
  IEEE Transactions on Knowledge and Data Engineering (TKDE), 2023 · 25 Oct 2023

Rethinking Data Distillation: Do Not Overlook Calibration
  IEEE International Conference on Computer Vision (ICCV), 2023 · 24 Jul 2023

The Importance of Robust Features in Mitigating Catastrophic Forgetting
  International Symposium on Computers and Communications (ISCC), 2023 · 29 Jun 2023

Repeated Random Sampling for Minimizing the Time-to-Accuracy of Learning
  International Conference on Learning Representations (ICLR), 2023 · 28 May 2023

Dataset Distillation: A Comprehensive Review
  IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2023 · 17 Jan 2023

A Comprehensive Survey of Dataset Distillation
  IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2023 · 13 Jan 2023

Scaling Up Dataset Distillation to ImageNet-1K with Constant Memory
  International Conference on Machine Learning (ICML), 2023 · 19 Nov 2022
