
DiffusionBlocks: Blockwise Training for Generative Models via Score-Based Diffusion

Main: 7 pages · 3 figures · 7 tables · Bibliography: 4 pages · Appendix: 2 pages
Abstract

Training large neural networks with end-to-end backpropagation creates significant memory bottlenecks, limiting accessibility to state-of-the-art AI research. We propose DiffusionBlocks, a novel training framework that interprets neural network blocks as performing denoising operations in a continuous-time diffusion process. By partitioning the network into independently trainable blocks and optimizing noise level assignments based on equal cumulative probability mass, our approach achieves significant memory efficiency while maintaining competitive performance compared to traditional backpropagation in generative tasks. Experiments on image generation and language modeling tasks demonstrate memory reduction proportional to the number of blocks while achieving superior performance. DiffusionBlocks provides a promising pathway for democratizing access to large-scale neural network training with limited computational resources.
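The abstract mentions assigning noise levels to blocks so that each block covers equal cumulative probability mass of the noise distribution. The paper's exact parameterization is not given here, so the following is a minimal sketch of that idea only, assuming an EDM-style log-normal distribution over noise levels; the function name assign_noise_ranges and the parameters p_mean and p_std are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.stats import norm

def assign_noise_ranges(num_blocks, p_mean=-1.2, p_std=1.2):
    """Split an assumed log-normal noise-level distribution into `num_blocks`
    contiguous ranges, each covering equal cumulative probability mass.
    Returns a list of (sigma_low, sigma_high) tuples, one per block."""
    # Quantiles at 0, 1/B, 2/B, ..., 1 of the distribution over log(sigma).
    quantiles = np.linspace(0.0, 1.0, num_blocks + 1)
    log_sigmas = norm.ppf(quantiles, loc=p_mean, scale=p_std)
    sigmas = np.exp(log_sigmas)  # outermost endpoints are 0 and inf
    return [(sigmas[i], sigmas[i + 1]) for i in range(num_blocks)]

if __name__ == "__main__":
    # Each block would then be trained independently on noise levels
    # sampled from its own range.
    for i, (lo, hi) in enumerate(assign_noise_ranges(num_blocks=4)):
        print(f"block {i}: sigma in [{lo:.4f}, {hi:.4f})")
```

Under this assumption, every block sees the same share of training signal, rather than splitting the sigma axis into equal-width intervals that would concentrate most samples in a few blocks.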

@article{shing2025_2506.14202,
  title={DiffusionBlocks: Blockwise Training for Generative Models via Score-Based Diffusion},
  author={Makoto Shing and Takuya Akiba},
  journal={arXiv preprint arXiv:2506.14202},
  year={2025}
}