Reducing Transformer Depth on Demand with Structured Dropout

Angela Fan, Edouard Grave, Armand Joulin
25 September 2019 · arXiv: 1909.11556 (abs / PDF / HTML)
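The subject paper introduces LayerDrop: entire transformer layers are randomly skipped during training, which both regularizes the network and lets sub-networks of any target depth be pruned at inference without fine-tuning. Below is a minimal PyTorch sketch of the idea, assuming standard nn.TransformerEncoderLayer blocks; the class name DropLayerEncoder and the keep_every pruning knob are illustrative, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class DropLayerEncoder(nn.Module):
    """Encoder with LayerDrop-style structured dropout: whole layers are
    skipped stochastically in training and pruned deterministically at
    inference. Sketch only; not the paper's released code."""

    def __init__(self, num_layers=12, d_model=512, nhead=8, p_drop=0.2):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
            for _ in range(num_layers)
        )
        self.p_drop = p_drop  # per-layer drop rate

    def forward(self, x, keep_every=None):
        for i, layer in enumerate(self.layers):
            if self.training:
                # Training: drop the entire layer with probability p_drop.
                if torch.rand(()) < self.p_drop:
                    continue
            elif keep_every is not None and i % keep_every != 0:
                # Inference: a simple "every other" pruning rule in the
                # spirit of the paper; keep_every=2 keeps layers 0, 2, 4, ...
                continue
            x = layer(x)
        return x

model = DropLayerEncoder()
x = torch.randn(4, 16, 512)  # (batch, seq_len, d_model)
model.train()
y_full = model(x)            # stochastic depth during training
model.eval()
with torch.no_grad():
    y_half = model(x, keep_every=2)  # shallower network on demand
```

Unlike standard dropout, no rescaling is applied when layers are kept: each layer sits on a residual connection, so skipping it reduces to the identity, which is why networks trained this way tolerate depth reduction at inference.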

Papers citing "Reducing Transformer Depth on Demand with Structured Dropout"

50 of 406 citing papers shown.
EL-Attention: Memory Efficient Lossless Attention for Generation
Yu Yan, Jiusheng Chen, Weizhen Qi, Nikhil Bhendawade, Yeyun Gong, Nan Duan, Ruofei Zhang
VLM · 68 · 6 · 0 · 11 May 2021

Extract then Distill: Efficient and Effective Task-Agnostic BERT Distillation
Cheng Chen, Yichun Yin, Lifeng Shang, Zhi Wang, Xin Jiang, Xiao Chen, Qun Liu
FedML · 78 · 7 · 0 · 24 Apr 2021

Differentiable Model Compression via Pseudo Quantization Noise
Alexandre Défossez, Yossi Adi, Gabriel Synnaeve
DiffM, MQ · 92 · 50 · 0 · 20 Apr 2021

Consistent Accelerated Inference via Confident Adaptive Transformers
Tal Schuster, Adam Fisch, Tommi Jaakkola, Regina Barzilay
AI4TS · 255 · 73 · 0 · 18 Apr 2021

Rethinking Network Pruning -- under the Pre-train and Fine-tune Paradigm
Dongkuan Xu, Ian En-Hsu Yen, Jinxi Zhao, Zhibin Xiao
VLM, AAML · 95 · 58 · 0 · 18 Apr 2021

UniDrop: A Simple yet Effective Technique to Improve Transformer without Extra Cost
Zhen Wu, Lijun Wu, Qi Meng, Yingce Xia, Shufang Xie, Tao Qin, Xinyu Dai, Tie-Yan Liu
88 · 22 · 0 · 11 Apr 2021

Not All Attention Is All You Need
Hongqiu Wu, Hai Zhao, Min Zhang
71 · 9 · 0 · 10 Apr 2021

ODE Transformer: An Ordinary Differential Equation-Inspired Model for Neural Machine Translation
Bei Li, Quan Du, Tao Zhou, Shuhan Zhou, Xin Zeng, Tong Xiao, Jingbo Zhu
63 · 23 · 0 · 06 Apr 2021

Dynamic Encoder Transducer: A Flexible Solution For Trading Off Accuracy For Latency
Yangyang Shi, Varun K. Nagaraja, Chunyang Wu, Jay Mahadeokar, Duc Le, ..., Ching-Feng Yeh, Julian Chan, Christian Fuegen, Ozlem Kalinli, M. Seltzer
55 · 15 · 0 · 05 Apr 2021

Robust wav2vec 2.0: Analyzing Domain Shift in Self-Supervised Pre-Training
Wei-Ning Hsu, Anuroop Sriram, Alexei Baevski, Tatiana Likhomanenko, Qiantong Xu, ..., Jacob Kahn, Ann Lee, R. Collobert, Gabriel Synnaeve, Michael Auli
SSL · 93 · 241 · 0 · 02 Apr 2021

Going deeper with Image Transformers
Hugo Touvron, Matthieu Cord, Alexandre Sablayrolles, Gabriel Synnaeve, Hervé Jégou
ViT · 196 · 1,025 · 0 · 31 Mar 2021

Finetuning Pretrained Transformers into RNNs
Jungo Kasai, Hao Peng, Yizhe Zhang, Dani Yogatama, Gabriel Ilharco, Nikolaos Pappas, Yi Mao, Weizhu Chen, Noah A. Smith
112 · 67 · 0 · 24 Mar 2021

The NLP Cookbook: Modern Recipes for Transformer based Deep Learning Architectures
Sushant Singh, A. Mahmood
AI4TS · 111 · 95 · 0 · 23 Mar 2021

IOT: Instance-wise Layer Reordering for Transformer Structures
Jinhua Zhu, Lijun Wu, Yingce Xia, Shufang Xie, Tao Qin, Wen-gang Zhou, Houqiang Li, Tie-Yan Liu
84 · 7 · 0 · 05 Mar 2021

Memory-efficient Speech Recognition on Smart Devices
Ganesh Venkatesh, Alagappan Valliappan, Jay Mahadeokar, Shangguan Yuan, Christian Fuegen, M. Seltzer, Vikas Chandra
66 · 11 · 0 · 23 Feb 2021

End-to-End Neural Systems for Automatic Children Speech Recognition: An Empirical Study
Prashanth Gurunath Shivakumar, Shrikanth Narayanan
53 · 54 · 0 · 19 Feb 2021

Learning Dynamic BERT via Trainable Gate Variables and a Bi-modal Regularizer
Seohyeong Jeong, Nojun Kwak
20 · 0 · 0 · 19 Feb 2021

TransReID: Transformer-based Object Re-Identification
Shuting He, Haowen Luo, Pichao Wang, F. Wang, Hao Li, Wei Jiang
ViT · 288 · 826 · 0 · 08 Feb 2021

AutoFreeze: Automatically Freezing Model Blocks to Accelerate Fine-tuning
Yuhan Liu, Saurabh Agarwal, Shivaram Venkataraman
OffRL · 80 · 56 · 0 · 02 Feb 2021

BENDR: using transformers and a contrastive self-supervised learning task to learn from massive amounts of EEG data
Demetres Kostas, Stephane Aroca-Ouellette, Frank Rudzicz
SSL · 120 · 210 · 0 · 28 Jan 2021

Model Compression for Domain Adaptation through Causal Effect Estimation
Guy Rotman, Amir Feder, Roi Reichart
CML · 92 · 7 · 0 · 18 Jan 2021

KDLSQ-BERT: A Quantized Bert Combining Knowledge Distillation with Learned Step Size Quantization
Jing Jin, Cai Liang, Tiancheng Wu, Li Zou, Zhiliang Gan
MQ · 59 · 27 · 0 · 15 Jan 2021

I-BERT: Integer-only BERT Quantization
Sehoon Kim, A. Gholami, Z. Yao, Michael W. Mahoney, Kurt Keutzer
MQ · 179 · 354 · 0 · 05 Jan 2021

An Efficient Transformer Decoder with Compressed Sub-layers
Yanyang Li, Ye Lin, Tong Xiao, Jingbo Zhu
88 · 30 · 0 · 03 Jan 2021

Subformer: Exploring Weight Sharing for Parameter Efficiency in Generative Transformers
Machel Reid, Edison Marrese-Taylor, Y. Matsuo
MoE · 108 · 48 · 0 · 01 Jan 2021

EarlyBERT: Efficient BERT Training via Early-bird Lottery Tickets
Xiaohan Chen, Yu Cheng, Shuohang Wang, Zhe Gan, Zhangyang Wang, Jingjing Liu
131 · 100 · 0 · 31 Dec 2020

BinaryBERT: Pushing the Limit of BERT Quantization
Haoli Bai, Wei Zhang, Lu Hou, Lifeng Shang, Jing Jin, Xin Jiang, Qun Liu, Michael Lyu, Irwin King
MQ · 230 · 227 · 0 · 31 Dec 2020

Reservoir Transformers
Sheng Shen, Alexei Baevski, Ari S. Morcos, Kurt Keutzer, Michael Auli, Douwe Kiela
88 · 18 · 0 · 30 Dec 2020

CascadeBERT: Accelerating Inference of Pre-trained Language Models via Calibrated Complete Models Cascade
Lei Li, Yankai Lin, Deli Chen, Shuhuai Ren, Peng Li, Jie Zhou, Xu Sun
115 · 52 · 0 · 29 Dec 2020

Learning Light-Weight Translation Models from Deep Transformer
Bei Li, Ziyang Wang, Hui Liu, Quan Du, Tong Xiao, Chunliang Zhang, Jingbo Zhu
VLM · 171 · 40 · 0 · 27 Dec 2020

Training data-efficient image transformers & distillation through attention
Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou
ViT · 402 · 6,848 · 0 · 23 Dec 2020

A Survey on Visual Transformer
Kai Han, Yunhe Wang, Hanting Chen, Xinghao Chen, Jianyuan Guo, ..., Chunjing Xu, Yixing Xu, Zhaohui Yang, Yiman Zhang, Dacheng Tao
ViT · 233 · 2,278 · 0 · 23 Dec 2020

Trex: Learning Execution Semantics from Micro-Traces for Binary Similarity
Kexin Pei, Zhou Xuan, Junfeng Yang, Suman Jana, Baishakhi Ray
106 · 90 · 0 · 16 Dec 2020

Improving Task-Agnostic BERT Distillation with Layer Mapping Search
Xiaoqi Jiao, Huating Chang, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, Qun Liu
49 · 12 · 0 · 11 Dec 2020

MLS: A Large-Scale Multilingual Dataset for Speech Research
Vineel Pratap, Qiantong Xu, Anuroop Sriram, Gabriel Synnaeve, R. Collobert
AuLLM · 186 · 513 · 0 · 07 Dec 2020

Language Models not just for Pre-training: Fast Online Neural Noisy Channel Modeling
Shruti Bhosale, Kyra Yee, Sergey Edunov, Michael Auli
85 · 7 · 0 · 13 Nov 2020

Multilingual AMR-to-Text Generation
Angela Fan, Claire Gardent
51 · 33 · 0 · 10 Nov 2020

Don't Read Too Much into It: Adaptive Computation for Open-Domain Question Answering
Yuxiang Wu, Sebastian Riedel, Pasquale Minervini, Pontus Stenetorp
55 · 8 · 0 · 10 Nov 2020

Stochastic Attention Head Removal: A simple and effective method for improving Transformer Based ASR Models
Shucong Zhang, Erfan Loweimi, P. Bell, Steve Renals
26 · 0 · 0 · 08 Nov 2020

Rethinking the Value of Transformer Components
Wenxuan Wang, Zhaopeng Tu
84 · 40 · 0 · 07 Nov 2020

Know What You Don't Need: Single-Shot Meta-Pruning for Attention Heads
Zhengyan Zhang, Fanchao Qi, Zhiyuan Liu, Qun Liu, Maosong Sun
VLM · 91 · 31 · 0 · 07 Nov 2020

Optimizing Transformer for Low-Resource Neural Machine Translation
Ali Araabi, Christof Monz
VLM · 86 · 78 · 0 · 04 Nov 2020

Joint Masked CPC and CTC Training for ASR
Chaitanya Talnikar, Tatiana Likhomanenko, R. Collobert, Gabriel Synnaeve
SSL · 110 · 27 · 0 · 30 Oct 2020

Accelerating Training of Transformer-Based Language Models with Progressive Layer Dropping
Minjia Zhang, Yuxiong He
AI4CE · 48 · 104 · 0 · 26 Oct 2020

Pre-trained Summarization Distillation
Sam Shleifer, Alexander M. Rush
62 · 103 · 0 · 24 Oct 2020

AdapterDrop: On the Efficiency of Adapters in Transformers
Andreas Rücklé, Gregor Geigle, Max Glockner, Tilman Beck, Jonas Pfeiffer, Nils Reimers, Iryna Gurevych
125 · 267 · 0 · 22 Oct 2020

Rethinking Evaluation in ASR: Are Our Models Robust Enough?
Tatiana Likhomanenko, Qiantong Xu, Vineel Pratap, Paden Tomasello, Jacob Kahn, Gilad Avidov, R. Collobert, Gabriel Synnaeve
147 · 99 · 0 · 22 Oct 2020

SlimIPL: Language-Model-Free Iterative Pseudo-Labeling
Tatiana Likhomanenko, Qiantong Xu, Jacob Kahn, Gabriel Synnaeve, R. Collobert
VLM · 136 · 65 · 0 · 22 Oct 2020

Beyond English-Centric Multilingual Machine Translation
Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, ..., Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin
LRM · 100 · 863 · 0 · 21 Oct 2020

Training Flexible Depth Model by Multi-Task Learning for Neural Machine Translation
Qiang Wang, Tong Xiao, Jingbo Zhu
42 · 2 · 0 · 16 Oct 2020