ResearchTrend.AI

A SOT-MRAM-based Processing-In-Memory Engine for Highly Compressed DNN Implementation

24 November 2019 · arXiv: 1912.05416
Geng Yuan, Xiaolong Ma, Sheng Lin, Zechao Li, Caiwen Ding

Papers citing "A SOT-MRAM-based Processing-In-Memory Engine for Highly Compressed DNN Implementation"

10 / 10 papers shown

 1. An Ultra-Efficient Memristor-Based DNN Framework with Structured Weight Pruning and Quantization Using ADMM (29 Aug 2019)
    Geng Yuan, Xiaolong Ma, Caiwen Ding, Sheng Lin, Tianyun Zhang, Zeinab S. Jalali, Yilong Zhao, Li Jiang, S. Soundarajan, Yanzhi Wang

 2. AutoCompress: An Automatic DNN Structured Pruning Framework for Ultra-High Compression Rates (06 Jul 2019)
    Ning Liu, Xiaolong Ma, Zhiyuan Xu, Yanzhi Wang, Jian Tang, Jieping Ye

 3. Discrimination-aware Channel Pruning for Deep Neural Networks (28 Oct 2018)
    Zhuangwei Zhuang, Mingkui Tan, Bohan Zhuang, Jing Liu, Yong Guo, Qingyao Wu, Junzhou Huang, Jin-Hui Zhu

 4. Rethinking the Value of Network Pruning (11 Oct 2018)
    Zhuang Liu, Mingjie Sun, Tinghui Zhou, Gao Huang, Trevor Darrell

 5. A Systematic DNN Weight Pruning Framework using Alternating Direction Method of Multipliers (10 Apr 2018)
    Tianyun Zhang, Shaokai Ye, Kaiqi Zhang, Jian Tang, Wujie Wen, M. Fardad, Yanzhi Wang

 6. AMC: AutoML for Model Compression and Acceleration on Mobile Devices (10 Feb 2018)
    Yihui He, Ji Lin, Zhijian Liu, Hanrui Wang, Li Li, Song Han

 7. Learning Efficient Convolutional Networks through Network Slimming (22 Aug 2017)
    Zhuang Liu, Jianguo Li, Zhiqiang Shen, Gao Huang, Shoumeng Yan, Changshui Zhang

 8. ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression (20 Jul 2017)
    Jian-Hao Luo, Jianxin Wu, Weiyao Lin

 9. Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding (01 Oct 2015)
    Song Han, Huizi Mao, W. Dally

10. Learning both Weights and Connections for Efficient Neural Networks (08 Jun 2015)
    Song Han, Jeff Pool, J. Tran, W. Dally