
Understanding Transformer Encoder-Decoder Representations through Bernoulli Dropout

Xuanzhou Chen
Main: 8 pages
5 figures
2 tables
Bibliography: 1 page
Appendix: 2 pages
Abstract

We study Transformer overparameterization through the lens of angular similarity in high-dimensional encoder-decoder embeddings. We apply Bernoulli dropout between the encoder and the decoder, varying the keep probability p to identify a sparsity-dependent threshold above which the Top-1 prediction is preserved. Theoretically, we prove that if the effective sparsity of the embeddings is sufficiently large, the angular similarity, and thus decoder performance, remains stable under moderate coordinate dropout. Empirically, we implement Bernoulli dropout by constructing a Transformer model augmented with a Binary Erasure Channel (BEC) and test its performance on an English-French translation task. Experimental results visualize the trends in validation accuracy and BLEU score, both of which decline sharply once p falls below a threshold.
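The abstract does not state the theorem itself, but a standard back-of-envelope computation illustrates why effective sparsity controls angular stability; this is a heuristic sketch consistent with the claim, not the paper's result. Take a unit embedding x and an i.i.d. Bernoulli(p) mask m:

```latex
\[
\tilde{x} = m \odot x, \qquad
\langle x, \tilde{x} \rangle = \|\tilde{x}\|_2^2 = \sum_i m_i x_i^2 ,
\]
\[
\cos\theta(x, \tilde{x})
  = \frac{\langle x, \tilde{x} \rangle}{\|x\|_2 \, \|\tilde{x}\|_2}
  = \Big( \sum_i m_i x_i^2 \Big)^{1/2},
\qquad
\mathbb{E}\Big[\sum_i m_i x_i^2\Big] = p ,
\]
\[
\operatorname{Var}\Big[\sum_i m_i x_i^2\Big]
  = p(1-p)\sum_i x_i^4
  = \frac{p(1-p)}{s(x)},
\qquad s(x) := \|x\|_4^{-4}.
\]
```

Here s(x) is the participation ratio, one common notion of effective sparsity (whether it matches the paper's definition is an assumption). When s(x) is large, the variance vanishes and the angle concentrates at cos θ ≈ √p, so the decoder's Top-1 prediction survives as long as √p clears its decision margin, which is the shape of the sparsity-dependent threshold the abstract describes.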
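For concreteness, here is a minimal PyTorch sketch of the encoder-decoder bottleneck described above. The function name, tensor shapes, and the choice not to rescale by 1/p are illustrative assumptions, not the authors' code; a BEC, unlike standard inverted dropout, passes the raw erased signal downstream.

```python
import torch


def bec_erase(h: torch.Tensor, keep_prob: float) -> torch.Tensor:
    """Pass encoder output h through a Binary Erasure Channel:
    each coordinate survives independently with probability keep_prob
    and is zeroed otherwise. No 1/keep_prob rescaling is applied,
    so the decoder sees the erased embedding directly (an assumption;
    the abstract does not say whether the authors rescale)."""
    mask = torch.bernoulli(torch.full_like(h, keep_prob))
    return h * mask


# Illustrative usage with hypothetical shapes: a batch of 32 source
# sentences, 50 tokens each, model dimension 512.
h = torch.randn(32, 50, 512)
h_erased = bec_erase(h, keep_prob=0.8)  # feed h_erased to the decoder
```

Sweeping keep_prob toward 0 and re-evaluating validation accuracy and BLEU would reproduce the kind of threshold curves the experiments report.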
