
Optimal Gradient Checkpoint Search for Arbitrary Computation Graphs

31 July 2018
Jianwei Feng
Dong Huang
arXiv:1808.00079
Abstract

Deep Neural Networks (DNNs) require large amounts of GPU memory when trained on modern image and video datasets. Unfortunately, GPU memory on off-the-shelf devices is finite, which limits the image resolutions and batch sizes that can be used for better DNN performance. Existing approaches to alleviating the memory issue include better GPUs, distributed computation, and gradient checkpointing. Among them, gradient checkpointing is a favorable approach because it trades computation for memory and requires no hardware upgrades. In gradient checkpointing, only a subset of intermediate tensors, called Gradient Checkpoints (GCPs), is stored during the forward pass. During the backward pass, extra local forward passes are conducted to recompute the missing tensors. The total training memory cost becomes the sum of (1) the memory cost of the gradient checkpoints and (2) the maximum memory cost of any local forward pass. Achieving the maximal memory cut-off therefore requires an optimal algorithm for selecting GCPs. Existing gradient checkpointing approaches rely on either manually specified GCPs or heuristic GCP search on linear computation graphs (LCGs), and do not apply to arbitrary computation graphs (ACGs). In this paper, we present theory and optimal algorithms for GCP selection that, for the first time, apply to ACGs and achieve maximal memory cut-offs. Extensive experiments show that our approach consistently outperforms existing approaches on LCGs, and can cut up to 80% of training memory with a moderate time overhead (around 40%) on LCG and ACG DNNs such as AlexNet, VGG, ResNet, DenseNet, and Inception networks.
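To make the store-few/recompute-the-rest trade-off concrete, below is a minimal sketch using PyTorch's built-in torch.utils.checkpoint.checkpoint_sequential on a linear computation graph (a plain nn.Sequential stack). This illustrates the general checkpointing mechanism the abstract describes, not the paper's optimal GCP search for arbitrary graphs; the depth, layer widths, and segment count are illustrative assumptions.

```python
# Minimal gradient-checkpointing sketch (illustrative only, not the
# paper's optimal GCP search). Depth, widths, and segment count assumed.
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint_sequential

# A deep LCG-style model: 20 Linear+ReLU blocks in sequence.
blocks = [nn.Sequential(nn.Linear(1024, 1024), nn.ReLU()) for _ in range(20)]
model = nn.Sequential(*blocks)

x = torch.randn(32, 1024, requires_grad=True)

# Split the stack into 4 segments. During forward, only the tensors at
# the segment boundaries (the gradient checkpoints) are kept; during
# backward, each segment is re-run locally to rebuild its activations.
# Peak memory is roughly the checkpoint tensors plus the largest single
# segment's activations, at the cost of extra forward computation.
out = checkpoint_sequential(model, segments=4, input=x, use_reentrant=False)
out.sum().backward()
```

With 4 segments, only 3 boundary tensors are stored instead of all 20 blocks' activations, which mirrors the memory objective above: the checkpoint cost plus the maximum cost of a single local forward.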
