ResearchTrend.AI

Optimizing Performance of Recurrent Neural Networks on GPUs (arXiv:1604.01946)

7 April 2016
J. Appleyard, Tomáš Kočiský, Phil Blunsom

Papers citing "Optimizing Performance of Recurrent Neural Networks on GPUs"

15 papers shown

Simple Recurrence Improves Masked Language Models
Tao Lei, Ran Tian, Jasmijn Bastings, Ankur P. Parikh (23 May 2022)

Privacy-preserving Federated Learning for Residential Short Term Load Forecasting
Joaquín Delgado Fernández, Sergio Potenciano Menci, Chul Min Lee, Gilbert Fridgen (17 Nov 2021)

A Distributed Deep Reinforcement Learning Technique for Application Placement in Edge and Fog Computing Environments [OffRL]
M. Goudarzi, M. Palaniswami, Rajkumar Buyya (24 Oct 2021)

FusionStitching: Boosting Memory Intensive Computations for Deep Learning Workloads
Zhen Zheng, Pengzhan Zhao, Guoping Long, Feiwen Zhu, Kai Zhu, Wenyi Zhao, Lansong Diao, Jun Yang, Wei Lin (23 Sep 2020)

SMILES-X: autonomous molecular compounds characterization for small datasets without descriptors
G. Lambard, Ekaterina Gracheva (20 Jun 2019)

A Lightweight Recurrent Network for Sequence Modeling
Biao Zhang, Rico Sennrich (30 May 2019)

Scheduling Computation Graphs of Deep Learning Models on Manycore CPUs [GNN]
Linpeng Tang, Yida Wang, Theodore L. Willke, Kai Li (16 Jul 2018)

LSTM Benchmarks for Deep Learning Frameworks
Stefan Braun (05 Jun 2018)

Echo: Compiler-based GPU Memory Footprint Reduction for LSTM RNN Training
Bojian Zheng, Abhishek Tiwari, Nandita Vijaykumar, Gennady Pekhimenko (22 May 2018)

Sparse Persistent RNNs: Squeezing Large Recurrent Networks On-Chip
Feiwen Zhu, Jeff Pool, M. Andersch, J. Appleyard, Fung Xie (26 Apr 2018)

Demystifying Parallel and Distributed Deep Learning: An In-Depth Concurrency Analysis [GNN]
Tal Ben-Nun, Torsten Hoefler (26 Feb 2018)

A Scalable Near-Memory Architecture for Training Deep Neural Networks on Large In-Memory Datasets
Fabian Schuiki, Michael Schaffner, Frank K. Gürkaynak, Luca Benini (19 Feb 2018)

IMPALA: Scalable Distributed Deep-RL with Importance Weighted Actor-Learner Architectures
L. Espeholt, Hubert Soyer, Rémi Munos, Karen Simonyan, Volodymyr Mnih, ..., Vlad Firoiu, Tim Harley, Iain Dunning, Shane Legg, Koray Kavukcuoglu (05 Feb 2018)

E-PUR: An Energy-Efficient Processing Unit for Recurrent Neural Networks
Franyell Silfa, Gem Dot, J. Arnau, Antonio González (20 Nov 2017)

MobiRNN: Efficient Recurrent Neural Network Execution on Mobile GPU
Qingqing Cao, Niranjan Balasubramanian, A. Balasubramanian (03 Jun 2017)