ResearchTrend.AI

Dataset Condensation with Differentiable Siamese Augmentation (arXiv:2102.08259)
Bo-Lu Zhao, Hakan Bilen
DD · 16 February 2021

Papers citing "Dataset Condensation with Differentiable Siamese Augmentation" (20 of 70 shown)
Meta Knowledge Condensation for Federated Learning
Ping Liu, Xin Yu, Qiufeng Wang
DD, FedML · 30 / 28 / 0 · 29 Sep 2022
Compressed Gastric Image Generation Based on Soft-Label Dataset Distillation for Medical Data Sharing
Guang Li, Ren Togo, Takahiro Ogawa, Miki Haseyama
DD · 32 / 40 / 0 · 29 Sep 2022
Dataset Condensation with Latent Space Knowledge Factorization and Sharing
Haebeom Lee, Dong Bok Lee, Sung Ju Hwang
DD · 21 / 37 / 0 · 21 Aug 2022
FedDM: Iterative Distribution Matching for Communication-Efficient Federated Learning
Yuanhao Xiong, Ruochen Wang, Minhao Cheng, Felix X. Yu, Cho-Jui Hsieh
FedML, DD · 50 / 82 / 0 · 20 Jul 2022
DC-BENCH: Dataset Condensation Benchmark
Justin Cui, Ruochen Wang, Si Si, Cho-Jui Hsieh
DD · 40 / 72 / 0 · 20 Jul 2022
Condensing Graphs via One-Step Gradient Matching
Wei Jin, Xianfeng Tang, Haoming Jiang, Zheng Li, Danqing Zhang, Jiliang Tang, Bin Ying
DD · 31 / 98 / 0 · 15 Jun 2022
Remember the Past: Distilling Datasets into Addressable Memories for Neural Networks
Zhiwei Deng, Olga Russakovsky
FedML, DD · 41 / 92 / 0 · 06 Jun 2022
Infinite Recommendation Networks: A Data-Centric Approach
Noveen Sachdeva, Mehak Preet Dhaliwal, Carole-Jean Wu, Julian McAuley
DD · 33 / 28 / 0 · 03 Jun 2022
Dataset Distillation using Neural Feature Regression
Yongchao Zhou, E. Nezhadarya, Jimmy Ba
DD, FedML · 41 / 149 / 0 · 01 Jun 2022
Efficient Scheduling of Data Augmentation for Deep Reinforcement Learning
Byungchan Ko, Jungseul Ok
OnRL · 18 / 5 / 0 · 01 Jun 2022
Privacy for Free: How does Dataset Condensation Help Privacy?
Tian Dong, Bo-Lu Zhao, Lingjuan Lyu
DD · 24 / 113 / 0 · 01 Jun 2022
Synthesizing Informative Training Samples with GAN
Bo-Lu Zhao, Hakan Bilen
DD · 37 / 74 / 0 · 15 Apr 2022
Data-Centric Green AI: An Exploratory Empirical Study
Roberto Verdecchia, Luís Cruz, June Sallou, Michelle Lin, James Wickenden, Estelle Hotellier
22 / 40 / 0 · 06 Apr 2022
Generalizing Few-Shot NAS with Gradient Matching
Shou-Yong Hu, Ruochen Wang, Lanqing Hong, Zhenguo Li, Cho-Jui Hsieh, Jiashi Feng
25 / 23 / 0 · 29 Mar 2022
Dataset Distillation by Matching Training Trajectories
George Cazenavette, Tongzhou Wang, Antonio Torralba, Alexei A. Efros, Jun-Yan Zhu
FedML, DD · 76 / 363 / 0 · 22 Mar 2022
When less is more: Simplifying inputs aids neural network understanding
R. Schirrmeister, Rosanne Liu, Sara Hooker, T. Ball
24 / 5 / 0 · 14 Jan 2022
The Augmented Image Prior: Distilling 1000 Classes by Extrapolating from a Single Image
Yuki M. Asano, Aaqib Saeed
43 / 7 / 0 · 01 Dec 2021
Graph Condensation for Graph Neural Networks
Wei Jin, Lingxiao Zhao, Shichang Zhang, Yozen Liu, Jiliang Tang, Neil Shah
DD, AI4CE · 27 / 147 / 0 · 14 Oct 2021
Dataset Distillation with Infinitely Wide Convolutional Networks
Timothy Nguyen, Roman Novak, Lechao Xiao, Jaehoon Lee
DD · 49 / 229 / 0 · 27 Jul 2021
Distilled Replay: Overcoming Forgetting through Synthetic Samples
Andrea Rosasco, Antonio Carta, Andrea Cossu, Vincenzo Lomonaco, D. Bacciu
DD · 19 / 47 / 0 · 29 Mar 2021