Dual-side Sparse Tensor Core
20 May 2021
Yang-Feng Wang, Chen Zhang, Zhiqiang Xie, Cong Guo, Yunxin Liu, Jingwen Leng

Papers citing "Dual-side Sparse Tensor Core"

12 / 12 papers shown

Ditto: Accelerating Diffusion Model via Temporal Value Similarity
Sungbin Kim, Hyunwuk Lee, Wonho Cho, Mincheol Park, Won Woo Ro
20 Jan 2025

BitMoD: Bit-serial Mixture-of-Datatype LLM Acceleration
Yuzong Chen, Ahmed F. AbouElhamayed, Xilai Dai, Yang Wang, Marta Andronic, G. Constantinides, Mohamed S. Abdelfattah
18 Nov 2024 · MQ

Dual sparse training framework: inducing activation map sparsity via Transformed $\ell_1$ regularization
Xiaolong Yu, Cong Tian
30 May 2024

AdaptGear: Accelerating GNN Training via Adaptive Subgraph-Level Kernels on GPUs
Yangjie Zhou, Yaoxu Song, Jingwen Leng, Zihan Liu, Weihao Cui, Zhendong Zhang, Cong Guo, Quan Chen, Li-Wei Li, Minyi Guo
27 May 2023 · GNN

SPADE: Sparse Pillar-based 3D Object Detection Accelerator for Autonomous Driving
Minjae Lee, Seongmin Park, Hyung-Se Kim, Minyong Yoon, Jangwhan Lee, Junwon Choi, Nam Sung Kim, Mingu Kang, Jungwook Choi
12 May 2023 · 3DPC

Slice-and-Forge: Making Better Use of Caches for Graph Convolutional Network Accelerators
Min-hee Yoo, Jaeyong Song, Hyeyoon Lee, Jounghoo Lee, Namhyung Kim, Youngsok Kim, Jinho Lee
24 Jan 2023 · GNN

ANT: Exploiting Adaptive Numerical Data Type for Low-bit Deep Neural Network Quantization
Cong Guo, Chen Zhang, Jingwen Leng, Zihan Liu, Fan Yang, Yun-Bo Liu, Minyi Guo, Yuhao Zhu
30 Aug 2022 · MQ

Sparseloop: An Analytical Approach To Sparse Tensor Accelerator Modeling
Yannan Nellie Wu, Po-An Tsai, A. Parashar, Vivienne Sze, J. Emer
12 May 2022

Two Sparsities Are Better Than One: Unlocking the Performance Benefits of Sparse-Sparse Networks
Kevin Lee Hunter, Lawrence Spracklen, Subutai Ahmad
27 Dec 2021

SPA-GCN: Efficient and Flexible GCN Accelerator with an Application for Graph Similarity Computation
Atefeh Sohrabizadeh, Yuze Chi, Jason Cong
10 Nov 2021 · GNN

Characterizing and Demystifying the Implicit Convolution Algorithm on Commercial Matrix-Multiplication Accelerators
Yangjie Zhou, Mengtian Yang, Cong Guo, Jingwen Leng, Yun Liang, Quan Chen, M. Guo, Yuhao Zhu
08 Oct 2021

Effective Approaches to Attention-based Neural Machine Translation
Thang Luong, Hieu H. Pham, Christopher D. Manning
17 Aug 2015