A Split-then-Join Approach to Abstractive Summarization for Very Long Documents in a Low Resource Setting

11 May 2025
Lhuqita Fazry
Abstract

The BIGBIRD-PEGASUS model achieves state-of-the-art results on abstractive summarization of long documents. However, its capacity is still limited to a maximum of 4,096 tokens, which causes performance degradation when summarizing very long documents. A common way to deal with this issue is to truncate the documents. In this research, we take a different approach: we fine-tune the pretrained BIGBIRD-PEGASUS model on a dataset from another domain. First, we filter out all documents whose length is less than 20,000 tokens in order to focus on very long documents. To prevent the domain-shift problem and overfitting during transfer learning on a small dataset, we augment the dataset by splitting each document-summary training pair into parts, so that each document part fits within 4,096 tokens. The source code is available at this https URL.
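
The augmentation step lends itself to a short illustration. The sketch below, written against the Hugging Face transformers tokenizer for google/bigbird-pegasus-large-arxiv, shows one possible reading of the procedure: drop documents shorter than 20,000 tokens, then split each remaining document-summary pair into chunks of at most 4,096 tokens. The proportional splitting of the summary and all function names are assumptions made for illustration, not the author's exact implementation.

# Sketch of the split-then-join data augmentation (assumptions noted above).
from transformers import AutoTokenizer

MAX_TOKENS = 4096  # BIGBIRD-PEGASUS input limit
tokenizer = AutoTokenizer.from_pretrained("google/bigbird-pegasus-large-arxiv")

def split_pair(document, summary, max_tokens=MAX_TOKENS):
    """Split one (document, summary) pair into shorter training pairs
    so that each document part fits within the model's token limit."""
    doc_ids = tokenizer(document, add_special_tokens=False)["input_ids"]
    sum_ids = tokenizer(summary, add_special_tokens=False)["input_ids"]

    n_parts = max(1, -(-len(doc_ids) // max_tokens))   # ceiling division
    sum_step = max(1, -(-len(sum_ids) // n_parts))     # assumed: split summary proportionally

    pairs = []
    for i in range(n_parts):
        doc_chunk = doc_ids[i * max_tokens:(i + 1) * max_tokens]
        sum_chunk = sum_ids[i * sum_step:(i + 1) * sum_step]
        pairs.append((
            tokenizer.decode(doc_chunk, skip_special_tokens=True),
            tokenizer.decode(sum_chunk, skip_special_tokens=True),
        ))
    return pairs

def filter_and_augment(dataset, min_tokens=20_000):
    """Keep only very long documents (>= 20,000 tokens) and expand each
    surviving (document, summary) pair into multiple shorter pairs."""
    augmented = []
    for document, summary in dataset:
        doc_len = len(tokenizer(document, add_special_tokens=False)["input_ids"])
        if doc_len < min_tokens:
            continue
        augmented.extend(split_pair(document, summary))
    return augmented

At inference time, the "join" half of the approach would concatenate the partial summaries produced for each chunk; the code above covers only the training-side augmentation described in the abstract.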

@article{fazry2025_2505.06862,
  title={A Split-then-Join Approach to Abstractive Summarization for Very Long Documents in a Low Resource Setting},
  author={Lhuqita Fazry},
  journal={arXiv preprint arXiv:2505.06862},
  year={2025}
}