Dataset Distillation via Factorization
30 October 2022
Songhua Liu, Kai Wang, Xingyi Yang, Jingwen Ye, Xinchao Wang
DD
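The title points to a factorized parameterization of the distilled dataset: rather than storing every synthetic image directly, a compact set of shared components is expanded into many training images. The sketch below illustrates that general idea only; the class name, tensor sizes, and the small decoder architecture are assumptions made for illustration and are not taken from the paper.

```python
# Illustrative sketch of a factorized synthetic-dataset parameterization
# (shared learnable bases expanded by lightweight decoder networks).
# All names, sizes, and architectural choices here are assumptions.
import torch
import torch.nn as nn

class FactorizedSyntheticSet(nn.Module):
    """Stores a few low-resolution bases plus several small decoders;
    every (basis, decoder) pair yields one synthetic training image."""

    def __init__(self, num_bases=16, base_channels=3, base_size=16,
                 num_decoders=4, image_size=32):
        super().__init__()
        # Learnable bases shared across all generated synthetic images.
        self.bases = nn.Parameter(
            torch.randn(num_bases, base_channels, base_size, base_size))
        # Lightweight decoders that map each basis to a full-resolution image,
        # multiplying the effective number of distinct examples.
        self.decoders = nn.ModuleList([
            nn.Sequential(
                nn.Upsample(size=(image_size, image_size),
                            mode="bilinear", align_corners=False),
                nn.Conv2d(base_channels, 16, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.Conv2d(16, base_channels, kernel_size=3, padding=1),
            )
            for _ in range(num_decoders)
        ])

    def forward(self):
        # num_bases * num_decoders images are produced from far fewer stored
        # parameters than keeping each synthetic image as raw pixels.
        images = [decoder(self.bases) for decoder in self.decoders]
        return torch.cat(images, dim=0)

syn = FactorizedSyntheticSet()
print(syn().shape)  # torch.Size([64, 3, 32, 32])
```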

Papers citing "Dataset Distillation via Factorization" (28 of 28 shown)

• Leveraging Multi-Modal Information to Enhance Dataset Distillation (13 May 2025)
  Zhe Li, Hadrien Reynaud, Bernhard Kainz · DD

• Video Dataset Condensation with Diffusion Models (10 May 2025)
  Zhe Li, Hadrien Reynaud, Mischa Dombrowski, Sarah Cechnicka, Franciskus Xaverius Erick, Bernhard Kainz · DD, VGen

• Dataset Distillation with Probabilistic Latent Features (10 May 2025)
  Zhe Li, Sarah Cechnicka, Cheng Ouyang, Katharina Breininger, Peter Schüffler, Bernhard Kainz · DD

• A Large-Scale Study on Video Action Dataset Condensation (13 Mar 2025)
  Yang Chen, Sheng Guo, Bo Zheng, Limin Wang · DD

• Dataset Distillation via Committee Voting (13 Jan 2025)
  Jiacheng Cui, Zhaoyi Li, Xiaochen Ma, Xinyue Bi, Yaxin Luo, Zhiqiang Shen · DD, FedML

• Emphasizing Discriminative Features for Dataset Distillation in Complex Scenarios (22 Oct 2024)
  Kai Wang, Zekai Li, Zhi-Qi Cheng, Samir Khaki, A. Sajedi, Ramakrishna Vedantam, Konstantinos N. Plataniotis, Alexander G. Hauptmann, Yang You · DD

• Teddy: Efficient Large-Scale Dataset Distillation via Taylor-Approximated Matching (10 Oct 2024)
  Ruonan Yu, Songhua Liu, Jingwen Ye, Xinchao Wang · DD

• Breaking Class Barriers: Efficient Dataset Distillation via Inter-Class Feature Compensator (13 Aug 2024)
  Xin Zhang, Jiawei Du, Ping Liu, Joey Tianyi Zhou · DD

• GIFT: Unlocking Full Potential of Labels in Distilled Dataset at Near-zero Cost (23 May 2024)
  Xinyi Shang, Peng Sun, Tao Lin

• Distilled Datamodel with Reverse Gradient Matching (22 Apr 2024)
  Jingwen Ye, Ruonan Yu, Songhua Liu, Xinchao Wang · DD

• Navigating Complexity: Toward Lossless Graph Condensation via Expanding Window Matching (07 Feb 2024)
  Yuchen Zhang, Tianle Zhang, Kai Wang, Ziyao Guo, Yuxuan Liang, Xavier Bresson, Wei Jin, Yang You

• Dancing with Still Images: Video Distillation via Static-Dynamic Disentanglement (01 Dec 2023)
  Ziyu Wang, Yue Xu, Cewu Lu, Yong-Lu Li · DD

• Frequency Domain-based Dataset Distillation (15 Nov 2023)
  DongHyeok Shin, Seungjae Shin, Il-Chul Moon · DD

• AST: Effective Dataset Distillation through Alignment with Smooth and High-Quality Expert Trajectories (16 Oct 2023)
  Jiyuan Shen, Wenzhuo Yang, Kwok-Yan Lam · DD

• Graph Distillation with Eigenbasis Matching (13 Oct 2023)
  Yang Liu, Deyu Bo, Chuan Shi · DD

• Enhancing NeRF akin to Enhancing LLMs: Generalizable NeRF Transformer with Mixture-of-View-Experts (22 Aug 2023)
  Wenyan Cong, Hanxue Liang, Peihao Wang, Zhiwen Fan, Tianlong Chen, M. Varma, Yi Wang, Zhangyang Wang · MoE

• Robust Mixture-of-Expert Training for Convolutional Neural Networks (19 Aug 2023)
  Yihua Zhang, Ruisi Cai, Tianlong Chen, Guanhua Zhang, Huan Zhang, Pin-Yu Chen, Shiyu Chang, Zhangyang Wang, Sijia Liu · MoE, AAML, OOD

• A Comprehensive Study on Dataset Distillation: Performance, Privacy, Robustness and Fairness (05 May 2023)
  Zongxiong Chen, Jiahui Geng, Derui Zhu, Herbert Woisetschlaeger, Qing Li, Sonja Schimmler, Ruben Mayer, Chunming Rong · DD

• Propheter: Prophetic Teacher Guided Long-Tailed Distribution Learning (09 Apr 2023)
  Wenxiang Xu, Lin Chen, Linyun Zhou, Jie Lei, Lechao Cheng, Zunlei Feng, Min-Gyoo Song

• Dataset Distillation: A Comprehensive Review (17 Jan 2023)
  Ruonan Yu, Songhua Liu, Xinchao Wang · DD

• A Comprehensive Survey of Dataset Distillation (13 Jan 2023)
  Shiye Lei, Dacheng Tao · DD

• Data Distillation: A Survey (11 Jan 2023)
  Noveen Sachdeva, Julian McAuley · DD

• TinyMIM: An Empirical Study of Distilling MIM Pre-trained Models (03 Jan 2023)
  Sucheng Ren, Fangyun Wei, Zheng-Wei Zhang, Han Hu

• Dataset Condensation via Efficient Synthetic-Data Parameterization (30 May 2022)
  Jang-Hyun Kim, Jinuk Kim, Seong Joon Oh, Sangdoo Yun, Hwanjun Song, Joonhyun Jeong, Jung-Woo Ha, Hyun Oh Song · DD

• Dataset Condensation with Differentiable Siamese Augmentation (16 Feb 2021)
  Bo-Lu Zhao, Hakan Bilen · DD

• Overcoming Catastrophic Forgetting in Graph Neural Networks (10 Dec 2020)
  Huihui Liu, Yiding Yang, Xinchao Wang

• Distilling Knowledge from Graph Convolutional Networks (23 Mar 2020)
  Yiding Yang, Jiayan Qiu, Xiuming Zhang, Dacheng Tao, Xinchao Wang

• Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks (09 Mar 2017)
  Chelsea Finn, Pieter Abbeel, Sergey Levine · OOD