
XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks
arXiv:1603.05279 (v4, latest) · 16 March 2016
Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, Ali Farhadi · MQ

Papers citing "XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks"

50 / 1,765 papers shown
Extreme Model Compression with Structured Sparsity at Low Precision
Dan Liu, Nikita Dvornik, Xue Liu · MQ · 11 Nov 2025

Sensor Calibration Model Balancing Accuracy, Real-time, and Efficiency
Jinyong Yun, Hyungjin Kim, Seokho Ahn, Euijong Lee, Young-Duk Seo · 10 Nov 2025

A Survey on Deep Text Hashing: Efficient Semantic Text Retrieval with Binary Representation
Liyang He, Zhenya Huang, Cheng Yang, Rui Li, Zheng Zhang, Kai Zhang, Zhi Li, Qi Liu, Enhong Chen · 3DV · 31 Oct 2025

LoRAQuant: Mixed-Precision Quantization of LoRA to Ultra-Low Bits
Amir Reza Mirzaei, Yuqiao Wen, Yanshuai Cao, Lili Mou · MQ · 30 Oct 2025

Efficient Cost-and-Quality Controllable Arbitrary-scale Super-resolution with Fourier Constraints
Kazutoshi Akita, Norimichi Ukita · SupR · 28 Oct 2025
TernaryCLIP: Efficiently Compressing Vision-Language Models with Ternary Weights and Distilled Knowledge
Shu-Hao Zhang, Wei Tang, Chen Wu, Peng Hu, Nan Li, L. Zhang, Qi Zhang, Shao-Qun Zhang · MQ, VLM · 23 Oct 2025

Differentiable, Bit-shifting, and Scalable Quantization without training neural network from scratch
Zia Badar · MQ · 18 Oct 2025

MC#: Mixture Compressor for Mixture-of-Experts Large Models
Wei Huang, Yue Liao, Yukang Chen, Jianhui Liu, Haoru Tan, Si Liu, Shiming Zhang, Shuicheng Yan, Xiaojuan Qi · MoE, MQ · 13 Oct 2025

High-Dimensional Learning Dynamics of Quantized Models with Straight-Through Estimator
Yuma Ichikawa, Shuhei Kashiwamura, Ayaka Sakata · MQ · 12 Oct 2025

Receptive Field Expanded Look-Up Tables for Vision Inference: Advancing from Low-level to High-level Tasks
Xi Zhang, Xiaolin Wu · 12 Oct 2025
SQS: Bayesian DNN Compression through Sparse Quantized Sub-distributions
Ziyi Wang, Nan Jiang, Guang Lin, Qifan Song · MQ · 10 Oct 2025

Vanishing Contributions: A Unified Approach to Smoothly Transition Neural Models into Compressed Form
Lorenzo Nikiforos, Charalampos Antoniadis, Luciano Prono, F. Pareschi, R. Rovatti, Gianluca Setti · 09 Oct 2025

PT²-LLM: Post-Training Ternarization for Large Language Models
Xianglong Yan, Chengzhu Bao, Zhiteng Li, Tianao Zhang, Kaicheng Yang, Haotong Qin, Ruobing Xie, Xingwu Sun, Yulun Zhang · MQ · 27 Sep 2025

HTMA-Net: Towards Multiplication-Avoiding Neural Networks via Hadamard Transform and In-Memory Computing
Emadeldeen Hamdan, Ahmet Enis Cetin · 27 Sep 2025

Spatial-Spectral Binarized Neural Network for Panchromatic and Multi-spectral Images Fusion
Yizhen Jiang, Mengting Ma, Anqi Zhu, Xiaowen Ma, Jiaxin Li, Wei Zhang · MQ · 27 Sep 2025
Punching Above Precision: Small Quantized Model Distillation with Learnable Regularizer
Abdur Rehman, S. Sharif, Md Abdur Rahaman, M. J. Aashik Rasool, Seongwan Kim, J. Lee · MQ · 25 Sep 2025

Binary Autoencoder for Mechanistic Interpretability of Large Language Models
Hakaze Cho, Haolin Yang, Brian M. Kurkoski, Naoya Inoue · MQ · 25 Sep 2025

Bi-VLM: Pushing Ultra-Low Precision Post-Training Quantization Boundaries in Vision-Language Models
Xijun Wang, Junyun Huang, Rayyan Abdalla, Chengyuan Zhang, Ruiqi Xian, Wanrong Zhu · MQ, VLM · 23 Sep 2025

Deep Lookup Network (IEEE Transactions on Pattern Analysis and Machine Intelligence, 2025)
Yulan Guo, Longguang Wang, Wendong Mao, Xiaoyu Dong, Yingqian Wang, Li Liu, W. An · 17 Sep 2025
Breaking the Conventional Forward-Backward Tie in Neural Networks: Activation Functions
Luigi Troiano, Francesco Gissi, Vincenzo Benedetto, Genny Tortora · 08 Sep 2025

1 bit is all we need: binary normalized neural networks
Eduardo Lobo Lustoda Cabral, Paulo Pirozelli, Larissa Driemeier · MQ · 07 Sep 2025

Data-Augmented Quantization-Aware Knowledge Distillation
Justin Kur, Kaiqi Zhao · MQ · 04 Sep 2025

Binary Quantization For LLMs Through Dynamic Grouping
Xinzhe Zheng, Zhen-Qun Yang, H. Xie, S. J. Qin, Arlene Chen, Fangzhen Lin · MQ · 03 Sep 2025

Progressive Element-wise Gradient Estimation for Neural Network Quantization
Kaiqi Zhao · MQ · 27 Aug 2025
Quantized Neural Networks for Microcontrollers: A Comprehensive Review of Methods, Platforms, and Applications
Hamza A. Abushahla, Dara Varam, Ariel J. N. Panopio, Mohamed I. AlHajri · MQ · 20 Aug 2025

A Self-Ensemble Inspired Approach for Effective Training of Binary-Weight Spiking Neural Networks
Qingyan Meng, Mingqing Xiao, Zhengyu Ma, Huihui Zhou, Yonghong Tian, Zhouchen Lin · MQ · 18 Aug 2025

Rethinking 1-bit Optimization Leveraging Pre-trained Large Language Models
Zhijun Tu, Hanting Chen, Siqi Liu, Chuanjian Liu, Jian Li, Jie Hu, Yunhe Wang · MQ · 09 Aug 2025

Task complexity shapes internal representations and robustness in neural networks
Robert Jankowski, F. Radicchi, M. Á. Serrano, Marián Boguná, S. Fortunato · AAML · 07 Aug 2025
iFairy: the First 2-bit Complex LLM with All Parameters in {±1, ±i}
Feiyu Wang, Guoan Wang, Yihao Zhang, S. Wang, Weitao Li, Bokai Huang, Shimao Chen, Z. L. Jiang, Rui Xu, Tong Yang · MQ · 07 Aug 2025

An Architecture for Spatial Networking
Josh Millar, Ryan Gibb, Roy Ang, Hamed Haddadi · 30 Jul 2025

Compression Aware Certified Training
Changming Xu, Gagandeep Singh · 13 Jun 2025

BitTTS: Highly Compact Text-to-Speech Using 1.58-bit Quantization and Weight Indexing
Masaya Kawamura, Takuya Hasumi, Yuma Shirahata, Ryuichi Yamamoto · MQ · 04 Jun 2025

Learning Binarized Representations with Pseudo-positive Sample Enhancement for Efficient Graph Collaborative Filtering
Yankai Chen, Yue Que, Xinni Zhang, Chen Ma, Irwin King · 03 Jun 2025
LittleBit: Ultra Low-Bit Quantization via Latent Factorization
Banseok Lee, Dongkyu Kim, Youngcheon You, Youngmin Kim · MQ · 30 May 2025

Learning Interpretable Differentiable Logic Networks for Tabular Regression
C. Yue, N. Jha · 29 May 2025

Highly Efficient and Effective LLMs with Multi-Boolean Architectures
Ba-Hien Tran, Van Minh Nguyen · MQ · 28 May 2025

Compressing Sine-Activated Low-Rank Adapters through Post-Training Quantization
Cameron Gordon, Yiping Ji, Hemanth Saratchandran, Paul Albert, Simon Lucey · MQ · 28 May 2025

BTC-LLM: Efficient Sub-1-Bit LLM Quantization via Learnable Transformation and Binary Codebook
Hao Gu, Lujun Li, Zheyu Wang, B. Liu, Qiyuan Zhu, Sirui Han, Wenhan Luo · MQ · 24 May 2025
Spiking Neural Networks Need High Frequency Information
Yuetong Fang, Deming Zhou, Ziqing Wang, Hongwei Ren, Zecui Zeng, Lusong Li, Shibo Zhou, Zhanchen Zhu · 24 May 2025

A Principled Bayesian Framework for Training Binary and Spiking Neural Networks
James A. Walker, M. Khajehnejad, Adeel Razi · BDL · 23 May 2025

Beyond Discreteness: Finite-Sample Analysis of Straight-Through Estimator for Quantization
Halyun Jeong, Jack Xin, Penghang Yin · MQ · 23 May 2025

Automatic Complementary Separation Pruning Toward Lightweight CNNs
David Levin, Gonen Singer · 19 May 2025

An Overview of Arithmetic Adaptations for Inference of Convolutional Neural Networks on Re-configurable Hardware
Ilkay Wunderlich, Benjamin Koch, Sven Schönfeld · 19 May 2025

Addition is almost all you need: Compressing neural networks with double binary factorization
Vladimír Boža, Vladimír Macko · MQ · 16 May 2025
PROM: Prioritize Reduction of Multiplications Over Lower Bit-Widths for Efficient CNNs
Lukas Meiner, Jens Mehnert, Alexandru Paul Condurache · MQ · 06 May 2025

Efficient Continual Learning in Keyword Spotting using Binary Neural Networks (Sensors Applications Symposium (SAS), 2025)
Quynh Nguyen Phuong Vu, Luciano S. Martinez-Rau, Yuxuan Zhang, Nho-Duc Tran, Bengt Oelmann, Michele Magno, Sebastian Bader · CLL · 05 May 2025

FPGA-based Acceleration for Convolutional Neural Networks: A Comprehensive Review
Junye Jiang, Yaan Zhou, Yuanhao Gong, Haoxuan Yuan, Shuanglong Liu · 04 May 2025

Practical Boolean Backpropagation
Simon Golbert · 01 May 2025

Optimizing Deep Neural Networks using Safety-Guided Self Compression
Mohammad Zbeeb, Mariam Salman, Mohammad Bazzi, Ammar Mohanna · 01 May 2025

Silenzio: Secure Non-Interactive Outsourced MLP Training
Jonas Sander, T. Eisenbarth · 24 Apr 2025