Sparsely-Connected Neural Networks: Towards Efficient VLSI Implementation of Deep Neural Networks

4 November 2016 · A. Ardakani, C. Condo, W. Gross

Papers citing "Sparsely-Connected Neural Networks: Towards Efficient VLSI Implementation of Deep Neural Networks"

13 of 13 citing papers shown.

1. D-Score: A Synapse-Inspired Approach for Filter Pruning
   Doyoung Park, Jinsoo Kim, Ji-Min Nam, Jooyoung Chang, S. Park · 08 Aug 2023

2. Training Deep Boltzmann Networks with Sparse Ising Machines
   Shaila Niazi, Navid Anjum Aadit, Masoud Mohseni, S. Chowdhury, Yao Qin, Kerem Y. Çamsarı · AI4CE · 19 Mar 2023

3. Boosting Robustness Verification of Semantic Feature Neighborhoods
   Anan Kabaha, Dana Drachsler-Cohen · AAML · 12 Sep 2022

4. Deep learning via message passing algorithms based on belief propagation
   C. Lucibello, Fabrizio Pittorino, Gabriele Perugini, R. Zecchina · 27 Oct 2021

5. Learning Gradual Argumentation Frameworks using Genetic Algorithms
   J. Spieler, Nico Potyka, Steffen Staab · AI4CE · 25 Jun 2021

6. Deep Neural Networks using a Single Neuron: Folded-in-Time Architecture using Feedback-Modulated Delay Loops
   Florian Stelzer, André Röhm, Raul Vicente, Ingo Fischer · University of Tartu · AI4CE · 19 Nov 2020

7. Attention Based Pruning for Shift Networks
   G. B. Hacene, Carlos Lassance, Vincent Gripon, Matthieu Courbariaux, Yoshua Bengio · 29 May 2019

8. Quantized Guided Pruning for Efficient Hardware Implementations of Convolutional Neural Networks
   G. B. Hacene, Vincent Gripon, M. Arzel, Nicolas Farrugia, Yoshua Bengio · MQ · 29 Dec 2018

9. Penetrating the Fog: the Path to Efficient CNN Models
   Kun Wan, Boyuan Feng, Shu Yang, Yufei Ding · 09 Oct 2018

10. Dynamic Sparse Graph for Efficient Deep Learning
    L. Liu, Lei Deng, Xing Hu, Maohua Zhu, Guoqi Li, Yufei Ding, Yuan Xie · GNN · 01 Oct 2018

11. Learning Recurrent Binary/Ternary Weights
    A. Ardakani, Zhengyun Ji, S. C. Smithson, B. Meyer, W. Gross · MQ · 28 Sep 2018

12. Attention-Based Guided Structured Sparsity of Deep Neural Networks
    A. Torfi, Rouzbeh A. Shirvani, Sobhan Soleymani, Nasser M. Nasrabadi · 13 Feb 2018

13. Compressing Low Precision Deep Neural Networks Using Sparsity-Induced Regularization in Ternary Networks
    Julian Faraone, Nicholas J. Fraser, Giulio Gambardella, Michaela Blott, Philip H. W. Leong · MQ, UQCV · 19 Sep 2017