
Controlling Summarization Length Through EOS Token Weighting

Abstract

Controlling the length of generated text can be crucial in various text-generation tasks, including summarization. Existing methods often require complex model alterations, limiting compatibility with pre-trained models. We address these limitations with a simple approach that controls the length of automatic text summaries by increasing the importance of correctly predicting the EOS token in the cross-entropy loss computation. The proposed methodology is agnostic to architecture and decoding algorithm, and orthogonal to other inference-time techniques for controlling generation length. We test it with encoder-decoder models and modern GPT-style LLMs, and show that the method can control generation length, often without affecting summary quality.
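A minimal sketch of the core idea, assuming a standard PyTorch training loop: the EOS class is simply upweighted via an ordinary per-class weight vector in the cross-entropy loss. The eos_weight value and function name below are illustrative, not the authors' released code.

import torch
import torch.nn.functional as F

def eos_weighted_cross_entropy(logits, targets, eos_token_id, eos_weight=2.0):
    # logits: (num_tokens, vocab_size), targets: (num_tokens,)
    vocab_size = logits.size(-1)
    # Weight of 1.0 for every vocabulary entry, a larger weight for EOS,
    # so mispredicting the end of the sequence is penalized more heavily.
    weights = torch.ones(vocab_size, device=logits.device)
    weights[eos_token_id] = eos_weight
    return F.cross_entropy(logits, targets, weight=weights)

Because the change is confined to the training loss, it can be dropped into fine-tuning of any pre-trained model without modifying the architecture or the decoding procedure.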

@article{belligoli2025_2506.05017,
  title={Controlling Summarization Length Through EOS Token Weighting},
  author={Zeno Belligoli and Emmanouil Stergiadis and Eran Fainman and Ilya Gusev},
  journal={arXiv preprint arXiv:2506.05017},
  year={2025}
}