HAT: Hardware-Aware Transformers for Efficient Natural Language Processing

28 May 2020 · arXiv:2005.14187
Hanrui Wang, Zhanghao Wu, Zhijian Liu, Han Cai, Ligeng Zhu, Chuang Gan, Song Han

Papers citing "HAT: Hardware-Aware Transformers for Efficient Natural Language Processing"

21 / 71 papers shown
QuantumNAS: Noise-Adaptive Search for Robust Quantum Circuits
Hanrui Wang, Yongshan Ding, Jiaqi Gu, Zirui Li, Yujun Lin, David Z. Pan, Frederic T. Chong, Song Han
22 Jul 2021

AutoFormer: Searching Transformers for Visual Recognition
Minghao Chen, Houwen Peng, Jianlong Fu, Haibin Ling
ViT · 01 Jul 2021

HELP: Hardware-Adaptive Efficient Latency Prediction for NAS via Meta-Learning
Hayeon Lee, Sewoong Lee, Song Chong, Sung Ju Hwang
16 Jun 2021

FEAR: A Simple Lightweight Method to Rank Architectures
Debadeepta Dey, Shital C. Shah, Sébastien Bubeck
OOD · 07 Jun 2021

Analogous to Evolutionary Algorithm: Designing a Unified Sequence Model
Jiangning Zhang, Chao Xu, Jian Li, Wenzhou Chen, Yabiao Wang, Ying Tai, Shuo Chen, Chengjie Wang, Feiyue Huang, Yong Liu
31 May 2021

Memory-Efficient Differentiable Transformer Architecture Search
Yuekai Zhao, Li Dong, Yelong Shen, Zhihua Zhang, Furu Wei, Weizhu Chen
ViT · 31 May 2021

Dynamic Multi-Branch Layers for On-Device Neural Machine Translation
Zhixing Tan, Zeyuan Yang, Meng Zhang, Qun Liu, Maosong Sun, Yang Liu
AI4CE · 14 May 2021

Dynamic-OFA: Runtime DNN Architecture Switching for Performance Scaling on Heterogeneous Embedded Platforms
W. Lou, Lei Xun, Amin Sabet, Jia Bi, Jonathon S. Hare, G. Merrett
AI4CE · 08 May 2021

Translational NLP: A New Paradigm and General Principles for Natural Language Processing Research
Denis R. Newman-Griffis, J. Lehman, Carolyn Rose, H. Hochheiser
16 Apr 2021

Enabling Design Methodologies and Future Trends for Edge AI: Specialization and Co-design
Cong Hao, Jordan Dotzel, Jinjun Xiong, Luca Benini, Zhiru Zhang, Deming Chen
25 Mar 2021

Scalable Vision Transformers with Hierarchical Pooling
Zizheng Pan, Bohan Zhuang, Jing Liu, Haoyu He, Jianfei Cai
ViT · 19 Mar 2021

AlphaNet: Improved Training of Supernets with Alpha-Divergence
Dilin Wang, Chengyue Gong, Meng Li, Qiang Liu, Vikas Chandra
16 Feb 2021

Dancing along Battery: Enabling Transformer with Run-time Reconfigurability on Mobile Devices
Yuhong Song, Weiwen Jiang, Bingbing Li, Panjie Qi, Qingfeng Zhuge, E. Sha, Sakyasingha Dasgupta, Yiyu Shi, Caiwen Ding
12 Feb 2021

A Comprehensive Survey on Hardware-Aware Neural Architecture Search
Hadjer Benmeziane, Kaoutar El Maghraoui, Hamza Ouarnoughi, Smail Niar, Martin Wistuba, Naigang Wang
22 Jan 2021

Transformers in Vision: A Survey
Salman Khan, Muzammal Naseer, Munawar Hayat, Syed Waqas Zamir, Fahad Shahbaz Khan, M. Shah
ViT · 04 Jan 2021

SpAtten: Efficient Sparse Attention Architecture with Cascade Token and Head Pruning
Hanrui Wang, Zhekai Zhang, Song Han
17 Dec 2020

Efficient Transformers: A Survey
Yi Tay, Mostafa Dehghani, Dara Bahri, Donald Metzler
VLM · 14 Sep 2020

Searching Efficient 3D Architectures with Sparse Point-Voxel Convolution
Haotian Tang, Zhijian Liu, Shengyu Zhao, Yujun Lin, Ji Lin, Hanrui Wang, Song Han
3DPC · 31 Jul 2020

GOBO: Quantizing Attention-Based NLP Models for Low Latency and Energy Efficient Inference
Ali Hadi Zadeh, Isak Edo, Omar Mohamed Awad, Andreas Moshovos
MQ · 08 May 2020

Neural Architecture Search with Reinforcement Learning
Barret Zoph, Quoc V. Le
05 Nov 2016

Convolutional Neural Networks for Sentence Classification
Yoon Kim
AILaw, VLM · 25 Aug 2014