Hardware Acceleration of Fully Quantized BERT for Efficient Natural Language Processing

4 March 2021
Zejian Liu, Gang Li, Jian Cheng
    MQ

Papers citing "Hardware Acceleration of Fully Quantized BERT for Efficient Natural Language Processing" (3 papers)

COBRA: Algorithm-Architecture Co-optimized Binary Transformer Accelerator for Edge Inference
Ye Qiao, Zhiheng Cheng, Yian Wang, Yifan Zhang, Yunzhe Deng, Sitao Huang
22 Apr 2025

How Does BERT Answer Questions? A Layer-Wise Analysis of Transformer Representations
Betty van Aken, B. Winter, Alexander Löser, Felix Alexander Gers
11 Sep 2019

Bit Fusion: Bit-Level Dynamically Composable Architecture for Accelerating Deep Neural Networks
Hardik Sharma, Jongse Park, Naveen Suda, Liangzhen Lai, Benson Chau, Joo-Young Kim, Vikas Chandra, H. Esmaeilzadeh
MQ
05 Dec 2017