ResearchTrend.AI

Minimizing the Accumulated Trajectory Error to Improve Dataset Distillation
arXiv:2211.11004 · 20 November 2022
Jiawei Du, Yiding Jiang, Vincent Y. F. Tan, Qiufeng Wang, Haizhou Li · DD

Papers citing "Minimizing the Accumulated Trajectory Error to Improve Dataset Distillation"

25 / 25 papers shown
Leveraging Multi-Modal Information to Enhance Dataset Distillation
Zhe Li, Hadrien Reynaud, Bernhard Kainz · DD · 13 May 2025

Dataset Distillation with Probabilistic Latent Features
Zhe Li, Sarah Cechnicka, Cheng Ouyang, Katharina Breininger, Peter Schüffler, Bernhard Kainz · DD · 10 May 2025

Video Dataset Condensation with Diffusion Models
Zhe Li, Hadrien Reynaud, Mischa Dombrowski, Sarah Cechnicka, Franciskus Xaverius Erick, Bernhard Kainz · DD, VGen · 10 May 2025

When Dynamic Data Selection Meets Data Augmentation
Steve Yang, Peng Ye, Furao Shen, Dongzhan Zhou · 02 May 2025

Dataset Distillation via Committee Voting
Jiacheng Cui, Zhaoyi Li, Xiaochen Ma, Xinyue Bi, Yaxin Luo, Zhiqiang Shen · DD, FedML · 13 Jan 2025

Emphasizing Discriminative Features for Dataset Distillation in Complex Scenarios
Kai Wang, Zekai Li, Zhi-Qi Cheng, Samir Khaki, A. Sajedi, Ramakrishna Vedantam, Konstantinos N. Plataniotis, Alexander G. Hauptmann, Yang You · DD · 22 Oct 2024

Teddy: Efficient Large-Scale Dataset Distillation via Taylor-Approximated Matching
Ruonan Yu, Songhua Liu, Jingwen Ye, Xinchao Wang · DD · 10 Oct 2024

Dataset Distillation via Knowledge Distillation: Towards Efficient Self-Supervised Pre-Training of Deep Networks
S. Joshi, Jiayi Ni, Baharan Mirzasoleiman · DD · 03 Oct 2024

Distilling Long-tailed Datasets
Zhenghao Zhao, Haoxuan Wang, Yuzhang Shang, Kai Wang, Yan Yan · DD · 24 Aug 2024

Breaking Class Barriers: Efficient Dataset Distillation via Inter-Class Feature Compensator
Xin Zhang, Jiawei Du, Ping Liu, Joey Tianyi Zhou · DD · 13 Aug 2024

A Label is Worth a Thousand Images in Dataset Distillation
Tian Qin, Zhiwei Deng, David Alvarez-Melis · DD · 15 Jun 2024

SelMatch: Effectively Scaling Up Dataset Distillation via Selection-Based Initialization and Partial Updates by Trajectory Matching
Yongmin Lee, Hye Won Chung · 28 May 2024

ATOM: Attention Mixer for Efficient Dataset Distillation
Samir Khaki, A. Sajedi, Kai Wang, Lucy Z. Liu, Y. Lawryshyn, Konstantinos N. Plataniotis · 02 May 2024

Navigating Complexity: Toward Lossless Graph Condensation via Expanding Window Matching
Yuchen Zhang, Tianle Zhang, Kai Wang, Ziyao Guo, Keli Zhang, Xavier Bresson, Wei Jin, Yang You · 07 Feb 2024

Group Distributionally Robust Dataset Distillation with Risk Minimization
Saeed Vahidian, Mingyu Wang, Jianyang Gu, Vyacheslav Kungurtsev, Wei Jiang, Yiran Chen · OOD, DD · 07 Feb 2024

Deductive Beam Search: Decoding Deducible Rationale for Chain-of-Thought Reasoning
Tinghui Zhu, Kai Zhang, Jian Xie, Yu-Chuan Su · LRM · 31 Jan 2024

Spanning Training Progress: Temporal Dual-Depth Scoring (TDDS) for Enhanced Dataset Pruning
Xin Zhang, Jiawei Du, Yunsong Li, Weiying Xie, Qiufeng Wang · 22 Nov 2023

AST: Effective Dataset Distillation through Alignment with Smooth and High-Quality Expert Trajectories
Jiyuan Shen, Wenzhuo Yang, Kwok-Yan Lam · DD · 16 Oct 2023

Towards Efficient Deep Hashing Retrieval: Condensing Your Data via Feature-Embedding Matching
Tao Feng, Jie Zhang, Peizheng Wang, Zhijie Wang, Shengyuan Pang · DD · 29 May 2023

Dataset Distillation: A Comprehensive Review
Ruonan Yu, Songhua Liu, Xinchao Wang · DD · 17 Jan 2023

A Comprehensive Survey of Dataset Distillation
Shiye Lei, Dacheng Tao · DD · 13 Jan 2023

Dataset Distillation Using Parameter Pruning
Guang Li, Ren Togo, Takahiro Ogawa, Miki Haseyama · DD · 29 Sep 2022

Efficient Sharpness-aware Minimization for Improved Training of Neural Networks
Jiawei Du, Hanshu Yan, Jiashi Feng, Qiufeng Wang, Liangli Zhen, Rick Siow Mong Goh, Vincent Y. F. Tan · AAML · 07 Oct 2021

Dataset Condensation with Differentiable Siamese Augmentation
Bo Zhao, Hakan Bilen · DD · 16 Feb 2021

On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang · ODL · 15 Sep 2016