Reducing Data Motion to Accelerate the Training of Deep Neural Networks

5 April 2020
Sicong Zhuang, C. Malossi, Marc Casas
arXiv:2004.02297
Abstract

This paper reduces the cost of DNN training by decreasing the amount of data movement across heterogeneous architectures composed of several GPUs and multicore CPU devices. In particular, it proposes an algorithm that dynamically adapts the data representation format of network weights during training. This algorithm drives a compression procedure that reduces the size of the data before it is sent over the parallel system. We run an extensive evaluation campaign on several recent deep neural network models and two high-end parallel architectures composed of multiple GPUs and multicore CPU chips. Our solution achieves average performance improvements ranging from 6.18% to 11.91%.
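The abstract does not specify how the representation format is chosen, so the sketch below is only a minimal illustration of the general idea: pick the narrowest floating-point format whose round-trip error on the current weights stays within a tolerance, and cast to that format before sending the weights across the parallel system. The names choose_dtype and compress_for_transfer and the tolerance tol are hypothetical, not taken from the paper.

```python
import numpy as np

def choose_dtype(weights: np.ndarray, tol: float = 1e-3) -> np.dtype:
    # Hypothetical format-selection step: pick the narrowest float type
    # whose round-trip error, relative to the largest weight magnitude,
    # stays below the tolerance.
    scale = max(float(np.max(np.abs(weights))), 1e-12)
    for dtype in (np.float16, np.float32):
        roundtrip = weights.astype(dtype).astype(np.float64)
        if float(np.max(np.abs(weights - roundtrip))) / scale < tol:
            return np.dtype(dtype)
    return np.dtype(np.float64)

def compress_for_transfer(weights: np.ndarray, tol: float = 1e-3):
    # Cast the weights to the chosen narrower format so fewer bytes
    # cross the interconnect between GPU and CPU devices.
    dtype = choose_dtype(weights, tol)
    return weights.astype(dtype), dtype

# Example: a weight tensor whose values round-trip safely through float16,
# cutting the transferred payload to a quarter of its float64 size.
w = np.random.uniform(-1.0, 1.0, size=(1024, 1024))
payload, fmt = compress_for_transfer(w)
print(fmt, payload.nbytes / w.nbytes)  # float16 0.25
```

Because the tolerance is checked against the live weight values, the selected format can change as training progresses, which is the dynamic aspect the abstract describes; the actual compression procedure in the paper may differ.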
