arXiv: 2205.11656
FlexiBERT: Are Current Transformer Architectures too Homogeneous and Rigid?
Shikhar Tuli, Bhishma Dedhia, Shreshth Tuli, N. Jha
23 May 2022
Papers citing "FlexiBERT: Are Current Transformer Architectures too Homogeneous and Rigid?" (8 / 8 papers shown)
EdgeTran: Co-designing Transformers for Efficient Inference on Mobile Edge Platforms
Shikhar Tuli, N. Jha · 24 Mar 2023
AccelTran: A Sparsity-Aware Accelerator for Dynamic Inference with Transformers
Shikhar Tuli, N. Jha · 28 Feb 2023
CODEBench: A Neural Architecture and Hardware Accelerator Co-Design Framework
Shikhar Tuli, Chia-Hao Li, Ritvik Sharma, N. Jha · 07 Dec 2022
Primer: Searching for Efficient Transformers for Language Modeling
David R. So, Wojciech Mańke, Hanxiao Liu, Zihang Dai, Noam M. Shazeer, Quoc V. Le · 17 Sep 2021
GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman · 20 Apr 2018
Neural Architecture Search with Reinforcement Learning
Barret Zoph, Quoc V. Le · 05 Nov 2016
Effective Approaches to Attention-based Neural Machine Translation
Thang Luong, Hieu H. Pham, Christopher D. Manning · 17 Aug 2015
Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
Y. Gal, Zoubin Ghahramani · 06 Jun 2015