Mini-batch Serialization: CNN Training with Inter-layer Data Reuse

30 September 2018
Sangkug Lym, Armand Behroozi, W. Wen, Ge Li, Yongkee Kwon, M. Erez
arXiv:1810.00307

Papers citing "Mini-batch Serialization: CNN Training with Inter-layer Data Reuse"

5 / 5 papers shown
Slice-and-Forge: Making Better Use of Caches for Graph Convolutional Network Accelerators
Min-hee Yoo, Jaeyong Song, Hyeyoon Lee, Jounghoo Lee, Namhyung Kim, Youngsok Kim, Jinho Lee
GNN · 24 Jan 2023

EcoFlow: Efficient Convolutional Dataflows for Low-Power Neural Network Accelerators
Lois Orosa, Skanda Koppula, Yaman Umuroglu, Konstantinos Kanellopoulos, Juan Gómez Luna, Michaela Blott, K. Vissers, O. Mutlu
04 Feb 2022

Rethinking "Batch" in BatchNorm
Yuxin Wu, Justin Johnson
BDL · 17 May 2021

GradPIM: A Practical Processing-in-DRAM Architecture for Gradient Descent
Heesu Kim, Hanmin Park, Taehyun Kim, Kwanheum Cho, Eojin Lee, Soojung Ryu, Hyuk-Jae Lee, Kiyoung Choi, Jinho Lee
15 Feb 2021

FlexSA: Flexible Systolic Array Architecture for Efficient Pruned DNN Model Training
Sangkug Lym, M. Erez
27 Apr 2020