The pretrained model achieves strong results on abstractive text summarization for long documents. However, its capacity is still limited to a fixed maximum number of input tokens, which degrades performance when summarizing very long documents. A common way to deal with this issue is to truncate the documents. In this research, we take a different approach: we adapt the pretrained model by fine-tuning it on a dataset from another domain. First, we filter out all documents shorter than a minimum token length, so as to focus on very long documents. To prevent domain shift and overfitting during transfer learning on this small dataset, we augment the data by splitting each document-summary training pair into parts, so that each document part fits within the model's token limit. Source code is available online.
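As a rough illustration of the preprocessing described in the abstract, the sketch below filters out short documents and splits each remaining document-summary pair into chunks that fit the model's input limit. The helper names, the threshold MIN_DOC_TOKENS, the limit MAX_MODEL_TOKENS, and the proportional splitting of the summary are assumptions for illustration, not values or details taken from the paper; any Hugging Face-style tokenizer exposing tokenize() and convert_tokens_to_string() could be passed in.

    from typing import List, Tuple

    # Assumed placeholder values; the paper's actual thresholds are not given here.
    MIN_DOC_TOKENS = 16000    # keep only "very long" documents above this length
    MAX_MODEL_TOKENS = 4096   # assumed per-chunk input limit of the pretrained model

    def filter_long_documents(pairs: List[Tuple[str, str]], tokenizer) -> List[Tuple[str, str]]:
        """Keep only document-summary pairs whose document exceeds the length threshold."""
        return [
            (doc, summ)
            for doc, summ in pairs
            if len(tokenizer.tokenize(doc)) >= MIN_DOC_TOKENS
        ]

    def split_pair(doc: str, summ: str, tokenizer) -> List[Tuple[str, str]]:
        """Split one document-summary pair into parts that fit the model's token limit.

        Each document chunk is paired with a proportional slice of the summary;
        this is one possible reading of the split-based augmentation, not the
        paper's exact procedure.
        """
        doc_tokens = tokenizer.tokenize(doc)
        summ_tokens = tokenizer.tokenize(summ)
        n_parts = max(1, -(-len(doc_tokens) // MAX_MODEL_TOKENS))   # ceiling division
        summ_step = max(1, -(-len(summ_tokens) // n_parts))

        parts = []
        for i in range(n_parts):
            doc_chunk = doc_tokens[i * MAX_MODEL_TOKENS:(i + 1) * MAX_MODEL_TOKENS]
            summ_chunk = summ_tokens[i * summ_step:(i + 1) * summ_step]
            parts.append(
                (tokenizer.convert_tokens_to_string(doc_chunk),
                 tokenizer.convert_tokens_to_string(summ_chunk))
            )
        return parts

The filtering step would be applied first to keep only very long documents, after which split_pair expands each retained pair into several shorter training examples.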
@article{fazry2025_2505.06862,
  title   = {A Split-then-Join Approach to Abstractive Summarization for Very Long Documents in a Low Resource Setting},
  author  = {Lhuqita Fazry},
  journal = {arXiv preprint arXiv:2505.06862},
  year    = {2025}
}