ResearchTrend.AI
Skyline: Interactive In-Editor Computational Performance Profiling for Deep Neural Network Training (arXiv:2008.06798)
15 August 2020
Geoffrey X. Yu
Tovi Grossman
Gennady Pekhimenko

Papers citing "Skyline: Interactive In-Editor Computational Performance Profiling for Deep Neural Network Training"

6 papers shown
HawkEye: Statically and Accurately Profiling the Communication Cost of Models in Multi-party Learning
Wenqiang Ruan, Xin Lin, Ruisheng Zhou, Guopeng Lin, Shui Yu, Weili Han
16 Feb 2025
Guided Optimization for Image Processing Pipelines
Yuka Ikarashi, Jonathan Ragan-Kelley, Tsukasa Fukusato, Jun Kato, Takeo Igarashi
27 Jul 2021
A Runtime-Based Computational Performance Predictor for Deep Neural Network Training
Geoffrey X. Yu, Yubo Gao, P. Golikov, Gennady Pekhimenko
31 Jan 2021
Aggregated Residual Transformations for Deep Neural Networks
Saining Xie, Ross B. Girshick, Piotr Dollár, Zhuowen Tu, Kaiming He
16 Nov 2016
Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation
Yonghui Wu, M. Schuster, Zhiwen Chen, Quoc V. Le, Mohammad Norouzi, ..., Alex Rudnick, Oriol Vinyals, G. Corrado, Macduff Hughes, J. Dean
26 Sep 2016
On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang
15 Sep 2016