ResearchTrend.AI

Bit-wise Training of Neural Network Weights
arXiv: 2202.09571
19 February 2022
Cristian Ivan
Topics: MQ

Papers citing "Bit-wise Training of Neural Network Weights"

24 papers shown

Title | Authors | Topics | Citations | Date
EvilModel: Hiding Malware Inside of Neural Network Models | Zhi Wang, Chaoge Liu, Xiang Cui | - | 31 | 19 Jul 2021
High-Capacity Expert Binary Networks | Adrian Bulat, Brais Martínez, Georgios Tzimiropoulos | MQ | 59 | 07 Oct 2020
Training highly effective connectivities within neural networks with randomly initialized, fixed weights | Cristian Ivan, Razvan V. Florian | - | 4 | 30 Jun 2020
Logarithmic Pruning is All You Need | Laurent Orseau, Marcus Hutter, Omar Rivasplata | - | 88 | 22 Jun 2020
Reintroducing Straight-Through Estimators as Principled Methods for Stochastic Binary Networks | Alexander Shekhovtsov, Dmitry Molchanov | MQ | 16 | 11 Jun 2020
Proving the Lottery Ticket Hypothesis: Pruning is All You Need | Eran Malach, Gilad Yehudai, Shai Shalev-Shwartz, Ohad Shamir | - | 274 | 03 Feb 2020
What's Hidden in a Randomly Weighted Neural Network? | Vivek Ramanujan, Mitchell Wortsman, Aniruddha Kembhavi, Ali Farhadi, Mohammad Rastegari | - | 356 | 29 Nov 2019
Back to Simplicity: How to Train Accurate BNNs from Scratch? | Joseph Bethge, Haojin Yang, Marvin Bornstein, Christoph Meinel | AAML, MQ | 58 | 19 Jun 2019
Deconstructing Lottery Tickets: Zeros, Signs, and the Supermask | Hattie Zhou, Janice Lan, Rosanne Liu, J. Yosinski | UQCV | 386 | 03 May 2019
Improved training of binary networks for human pose estimation and image recognition | Adrian Bulat, Georgios Tzimiropoulos, Jean Kossaifi, Maja Pantic | MQ | 47 | 11 Apr 2019
Understanding Straight-Through Estimator in Training Activation Quantized Neural Nets | Penghang Yin, J. Lyu, Shuai Zhang, Stanley Osher, Y. Qi, Jack Xin | MQ, LLMSV | 314 | 13 Mar 2019
Parameter Efficient Training of Deep Convolutional Neural Networks by Dynamic Sparse Reparameterization | Hesham Mostafa, Xin Wang | - | 312 | 15 Feb 2019
SNIP: Single-shot Network Pruning based on Connection Sensitivity | Namhoon Lee, Thalaiyasingam Ajanthan, Philip Torr | VLM | 1,198 | 04 Oct 2018
Bi-Real Net: Enhancing the Performance of 1-bit CNNs With Improved Representational Capability and Advanced Training Algorithm | Zechun Liu, Baoyuan Wu, Wenhan Luo, Xin Yang, Wen Liu, K. Cheng | MQ | 555 | 01 Aug 2018
The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks | Jonathan Frankle, Michael Carbin | - | 3,461 | 09 Mar 2018
NeST: A Neural Network Synthesis Tool Based on a Grow-and-Prune Paradigm | Xiaoliang Dai, Hongxu Yin, N. Jha | DD | 235 | 06 Nov 2017
Trained Ternary Quantization | Chenzhuo Zhu, Song Han, Huizi Mao, W. Dally | MQ | 1,035 | 04 Dec 2016
Ternary Weight Networks | Fengfu Li, Bin Liu, Xiaoxing Wang, Bo Zhang, Junchi Yan | MQ | 525 | 16 May 2016
XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks | Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, Ali Farhadi | MQ | 4,353 | 16 Mar 2016
Deep Residual Learning for Image Recognition | Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun | MedIm | 193,814 | 10 Dec 2015
BinaryConnect: Training Deep Neural Networks with binary weights during propagations | Matthieu Courbariaux, Yoshua Bengio, J. David | MQ | 2,984 | 02 Nov 2015
Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification | Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun | VLM | 18,609 | 06 Feb 2015
Adam: A Method for Stochastic Optimization | Diederik P. Kingma, Jimmy Ba | ODL | 150,006 | 22 Dec 2014
Estimating or Propagating Gradients Through Stochastic Neurons | Yoshua Bengio | - | 110 | 14 May 2013