Efficient and Robust Parallel DNN Training through Model Parallelism on
Multi-GPU Platform
The training process of a Deep Neural Network (DNN) is compute-intensive, often taking days to weeks to train a model. Parallel execution of DNN training on GPUs is therefore a widely adopted approach to speed up the process. Owing to its implementation simplicity, data parallelism is currently the most commonly used parallelization method. Nonetheless, data parallelism suffers from excessive inter-GPU communication overhead due to frequent weight synchronization among GPUs. Another approach is model parallelism, which partitions the model among GPUs and can significantly reduce inter-GPU communication cost compared to data parallelism. However, model parallelism faces a staleness issue: gradients are computed with stale weights, leading to training instability and accuracy loss. In this paper, we propose a novel staleness-mitigation method that resolves the staleness issue with weight prediction. Experimental results show that the proposed weight prediction method is effective in resolving the staleness problem for model parallelism, achieving almost the same accuracy as data parallelism.
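The abstract does not spell out the prediction rule. One common formulation in the staleness-mitigation literature extrapolates future weights from the current weights and a smoothed (momentum) gradient, scaled by the number of updates the weights will be stale. The sketch below is a minimal illustration of that idea; the function name `predict_weights`, its parameters, and the momentum-based extrapolation rule are assumptions for illustration, not necessarily the paper's exact method.

```python
import torch

def predict_weights(params, momentum_buffers, lr, staleness):
    """Hypothetical weight prediction: extrapolate each parameter
    forward by `staleness` SGD-with-momentum steps, so forward and
    backward passes use an estimate of the future weights instead of
    the stale current ones.

    params           -- iterable of torch.nn.Parameter
    momentum_buffers -- dict mapping parameter -> smoothed gradient (velocity)
    lr               -- learning rate
    staleness        -- number of weight updates that will occur before
                        this GPU's gradient is actually applied
    """
    predicted = []
    with torch.no_grad():
        for p in params:
            v = momentum_buffers.get(p, torch.zeros_like(p))
            # Assume each of the next `staleness` updates moves the weight
            # by roughly -lr * v; apply that displacement up front.
            predicted.append(p - lr * staleness * v)
    return predicted
```

In a pipelined model-parallel schedule, each stage would invoke such a routine before its forward pass, with the staleness determined by its position in the pipeline; the true weights are still updated with the real gradients once they arrive.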