arXiv: 1807.02037
TFLMS: Large Model Support in TensorFlow by Graph Rewriting
5 July 2018
Tung D. Le
Haruki Imai
Yasushi Negishi
K. Kawachiya
GNN
Papers citing "TFLMS: Large Model Support in TensorFlow by Graph Rewriting" (7 of 7 papers shown)
GPU Memory Usage Optimization for Backward Propagation in Deep Network Training
Ding-Yong Hong, Tzu-Hsien Tsai, Ning Wang, Pangfeng Liu, Jan-Jan Wu
18 Feb 2025
ProTrain: Efficient LLM Training via Memory-Aware Techniques
Hanmei Yang, Jin Zhou, Yao Fu, Xiaoqun Wang, Ramine Roane, Hui Guan, Tongping Liu
VLM
12 Jun 2024
Systems for Parallel and Distributed Large-Model Deep Learning Training
Kabir Nagrecha
GNN, VLM, MoE
06 Jan 2023
Survey on Large Scale Neural Network Training
Julia Gusak, Daria Cherniuk, Alena Shilova, A. Katrutsa, Daniel Bershatsky, ..., Lionel Eyraud-Dubois, Oleg Shlyazhko, Denis Dimitrov, Ivan Oseledets, Olivier Beaumont
21 Feb 2022
AccUDNN: A GPU Memory Efficient Accelerator for Training Ultra-deep Neural Networks
Jinrong Guo, Wantao Liu, Wang Wang, Q. Lu, Songlin Hu, Jizhong Han, Ruixuan Li
21 Jan 2019
Data-parallel distributed training of very large models beyond GPU capacity
Samuel Matzek, M. Grossman, Minsik Cho, Anar Yusifov, Bryant Nelson, A. Juneja
GNN
29 Nov 2018
Universal Deep Neural Network Compression
Yoojin Choi, Mostafa El-Khamy, Jungwon Lee
MQ
07 Feb 2018