ResearchTrend.AI

SLIDE : In Defense of Smart Algorithms over Hardware Acceleration for Large-Scale Deep Learning Systems

7 March 2019
Beidi Chen, Tharun Medini, James Farwell, Sameh Gobriel, Charlie Tai, Anshumali Shrivastava

Papers citing "SLIDE : In Defense of Smart Algorithms over Hardware Acceleration for Large-Scale Deep Learning Systems"

17 / 17 papers shown
Sketch-GNN: Scalable Graph Neural Networks with Sublinear Training Complexity
Mucong Ding, Tahseen Rabbani, Bang An, Evan Z Wang, Furong Huang
21 Jun 2024

Memory Mosaics
Jianyu Zhang, Niklas Nolte, Ranajoy Sadhukhan, Beidi Chen, Léon Bottou
VLM
10 May 2024

ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs
Zhengyan Zhang, Yixin Song, Guanghui Yu, Xu Han, Yankai Lin, Chaojun Xiao, Chenyang Song, Zhiyuan Liu, Zeyu Mi, Maosong Sun
06 Feb 2024

Bypass Exponential Time Preprocessing: Fast Neural Network Training via Weight-Data Correlation Preprocessing
Josh Alman, Jiehao Liang, Zhao Song, Ruizhe Zhang, Danyang Zhuo
25 Nov 2022

Sublinear Time Algorithm for Online Weighted Bipartite Matching
Han Hu, Zhao Song, Runzhou Tao, Zhaozhuo Xu, Junze Yin, Danyang Zhuo
05 Aug 2022

Training Multi-Layer Over-Parametrized Neural Network in Subquadratic Time
Zhao Song, Licheng Zhang, Ruizhe Zhang
14 Dec 2021

Pixelated Butterfly: Simple and Efficient Sparse Training for Neural Network Models
Tri Dao, Beidi Chen, Kaizhao Liang, Jiaming Yang, Zhao Song, Atri Rudra, Christopher Ré
30 Nov 2021

Breaking the Linear Iteration Cost Barrier for Some Well-known Conditional Gradient Methods Using MaxIP Data-structures
Anshumali Shrivastava, Zhao Song, Zhaozhuo Xu
30 Nov 2021

Adaptive Elastic Training for Sparse Deep Learning on Heterogeneous Multi-GPU Servers
Yujing Ma, Florin Rusu, Kesheng Wu, A. Sim
13 Oct 2021

Does Preprocessing Help Training Over-parameterized Neural Networks?
Zhao Song, Shuo Yang, Ruizhe Zhang
09 Oct 2021

M-ar-K-Fast Independent Component Analysis
Luca Parisi
17 Aug 2021

A High Throughput Parallel Hash Table on FPGA using XOR-based Memory
Ruizhi Zhang, Sasindu Wijeratne, Yang Yang, S. Kuppannagari, Viktor Prasanna
07 Aug 2021

Sublinear Least-Squares Value Iteration via Locality Sensitive Hashing
Anshumali Shrivastava, Zhao Song, Zhaozhuo Xu
18 May 2021

A Survey on Large-scale Machine Learning
Meng Wang, Weijie Fu, Xiangnan He, Shijie Hao, Xindong Wu
10 Aug 2020

Climbing the WOL: Training for Cheaper Inference
Zichang Liu, Zhaozhuo Xu, A. Ji, Jonathan Li, Beidi Chen, Anshumali Shrivastava
TPM
02 Jul 2020

Faster Neural Network Training with Approximate Tensor Operations
Menachem Adelman, Kfir Y. Levy, Ido Hakimi, M. Silberstein
21 May 2018

On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang
ODL
15 Sep 2016
