
MaxPoolBERT: Enhancing BERT Classification via Layer- and Token-Wise Aggregation

Main: 8 pages, 4 figures, 6 tables; Bibliography: 2 pages; Appendix: 1 page
Abstract

The [CLS] token in BERT is commonly used as a fixed-length representation for classification tasks, yet prior work has shown that both other tokens and intermediate layers encode valuable contextual information. In this work, we propose MaxPoolBERT, a lightweight extension to BERT that refines the [CLS] representation by aggregating information across layers and tokens. Specifically, we explore three modifications: (i) max-pooling the [CLS] token across multiple layers, (ii) enabling the [CLS] token to attend over the entire final layer using an additional multi-head attention (MHA) layer, and (iii) combining max-pooling across the full sequence with MHA. Our approach improves BERT's classification accuracy (especially on low-resource tasks) without requiring additional pre-training or significantly increasing model size. Experiments on the GLUE benchmark show that MaxPoolBERT consistently outperforms the standard BERT-base model.
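The following is a minimal PyTorch sketch of the three aggregation variants described above, built on a Hugging Face BERT encoder. The class name, the number of pooled layers, the head count, and the exact way variant (iii) composes sequence max-pooling with MHA are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
from transformers import AutoModel


class MaxPoolBERTSketch(nn.Module):
    """Illustrative sketch of the layer- and token-wise aggregation variants."""

    def __init__(self, model_name="bert-base-uncased", num_labels=2,
                 num_layers_to_pool=4, num_heads=8, variant="layer_maxpool"):
        super().__init__()
        # Expose all hidden states so we can pool [CLS] across layers.
        self.bert = AutoModel.from_pretrained(model_name, output_hidden_states=True)
        hidden = self.bert.config.hidden_size
        self.variant = variant
        self.num_layers_to_pool = num_layers_to_pool
        # Extra multi-head attention used by variants (ii) and (iii).
        self.mha = nn.MultiheadAttention(hidden, num_heads, batch_first=True)
        self.classifier = nn.Linear(hidden, num_labels)

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        hidden_states = out.hidden_states        # (embeddings, layer1, ..., layerN)
        last_layer = out.last_hidden_state       # (batch, seq_len, hidden)
        pad_mask = attention_mask == 0           # True where tokens are padding

        if self.variant == "layer_maxpool":
            # (i) max-pool the [CLS] vector over the last k layers.
            cls_per_layer = torch.stack(
                [h[:, 0, :] for h in hidden_states[-self.num_layers_to_pool:]], dim=1
            )                                    # (batch, k, hidden)
            pooled = cls_per_layer.max(dim=1).values
        elif self.variant == "cls_mha":
            # (ii) let [CLS] attend over the entire final layer via extra MHA.
            query = last_layer[:, :1, :]         # [CLS] as the single query
            attn_out, _ = self.mha(query, last_layer, last_layer,
                                   key_padding_mask=pad_mask)
            pooled = attn_out[:, 0, :]
        else:
            # (iii) one possible combination (an assumption): max-pool the full
            # sequence into a single query, then refine it with MHA.
            masked = last_layer.masked_fill(pad_mask.unsqueeze(-1), float("-inf"))
            seq_max = masked.max(dim=1).values.unsqueeze(1)   # (batch, 1, hidden)
            attn_out, _ = self.mha(seq_max, last_layer, last_layer,
                                   key_padding_mask=pad_mask)
            pooled = attn_out[:, 0, :]

        return self.classifier(pooled)
```

The pooled vector replaces the usual final-layer [CLS] output as the classifier input; everything else in a standard fine-tuning loop stays unchanged.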

@article{behrendt2025_2505.15696,
  title={MaxPoolBERT: Enhancing BERT Classification via Layer- and Token-Wise Aggregation},
  author={Maike Behrendt and Stefan Sylvius Wagner and Stefan Harmeling},
  journal={arXiv preprint arXiv:2505.15696},
  year={2025}
}