ResearchTrend.AI
Efficient Sparse-Winograd Convolutional Neural Networks (arXiv:1802.06367)
18 February 2018
Xingyu Liu, Jeff Pool, Song Han, W. Dally

Papers citing "Efficient Sparse-Winograd Convolutional Neural Networks"

20 papers shown
YFlows: Systematic Dataflow Exploration and Code Generation for Efficient Neural Network Inference using SIMD Architectures on CPUs
Cyrus Zhou, Zack Hassman, Ruize Xu, Dhirpal Shah, Vaughn Richard, Yanjing Li
01 Oct 2023

Exploring Winograd Convolution for Cost-effective Neural Network Fault Tolerance
Xing-xiong Xue, Cheng Liu, Bo Liu, Haitong Huang, Ying Wang, Yaoyu Zhang, Lei Zhang, Huawei Li, Xiaowei Li
16 Aug 2023

Accelerating CNN inference on long vector architectures via co-design
Sonia Rani Gupta, Nikela Papadopoulou, Miquel Pericàs
Topics: 3DV
22 Dec 2022

BiViT: Extremely Compressed Binary Vision Transformer
Yefei He, Zhenyu Lou, Luoming Zhang, Jing Liu, Weijia Wu, Hong Zhou, Bohan Zhuang
Topics: ViT, MQ
14 Nov 2022

Low-Energy Convolutional Neural Networks (CNNs) using Hadamard Method
Varun Mannam
06 Sep 2022

Estimating the Power Consumption of Heterogeneous Devices when performing AI Inference
P. Machado, Ivica Matic, Francisco de Lemos, I. Ihianle, D. Adama
13 Jul 2022

EfficientFormer: Vision Transformers at MobileNet Speed
Yanyu Li, Geng Yuan, Yang Wen, Eric Hu, Georgios Evangelidis, Sergey Tulyakov, Yanzhi Wang, Jian Ren
Topics: ViT
02 Jun 2022

Winograd Convolution: A Perspective from Fault Tolerance
Xing-xiong Xue, Haitong Huang, Cheng Liu, Ying Wang, Yaoyu Zhang, Lefei Zhang
17 Feb 2022

EcoFlow: Efficient Convolutional Dataflows for Low-Power Neural Network Accelerators
Lois Orosa, Skanda Koppula, Yaman Umuroglu, Konstantinos Kanellopoulos, Juan Gómez Luna, Michaela Blott, K. Vissers, O. Mutlu
04 Feb 2022

Dual-side Sparse Tensor Core
Yang-Feng Wang, Chen Zhang, Zhiqiang Xie, Cong Guo, Yunxin Liu, Jingwen Leng
20 May 2021

Searching for Fast Model Families on Datacenter Accelerators
Sheng Li, Mingxing Tan, Ruoming Pang, Andrew Li, Liqun Cheng, Quoc V. Le, N. Jouppi
10 Feb 2021

Efficient Residue Number System Based Winograd Convolution
Zhi-Gang Liu, Matthew Mattina
23 Jul 2020

Efficient Crowd Counting via Structured Knowledge Transfer
Lingbo Liu, Jiaqi Chen, Hefeng Wu, Tianshui Chen, Guanbin Li, Liang Lin
23 Mar 2020

LANCE: Efficient Low-Precision Quantized Winograd Convolution for Neural Networks Based on Graphics Processing Units
Guangli Li, Lei Liu, Xueying Wang, Xiu Ma, Xiaobing Feng
Topics: MQ
19 Mar 2020

How Does BN Increase Collapsed Neural Network Filters?
Sheng Zhou, Xinjiang Wang, Ping Luo, Xue Jiang, Wenjie Li, Wei Zhang
30 Jan 2020

Single-shot Channel Pruning Based on Alternating Direction Method of Multipliers
Chengcheng Li, Zehao Wang, Xiangyang Wang, Hairong Qi
18 Feb 2019

Efficient Winograd Convolution via Integer Arithmetic
Lingchuan Meng, J. Brothers
07 Jan 2019

Universal Approximation with Quadratic Deep Networks
Fenglei Fan, Jinjun Xiong, Ge Wang
Topics: PINN
31 Jul 2018

Hypertree Decompositions Revisited for PGMs
A. S. Arun, Sai Vikneshwar Mani Jayaraman, Christopher Ré, Atri Rudra
Topics: TPM
02 Jul 2018

Demystifying Parallel and Distributed Deep Learning: An In-Depth Concurrency Analysis
Tal Ben-Nun, Torsten Hoefler
Topics: GNN
26 Feb 2018