PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization

18 December 2019
Jingqing Zhang, Yao Zhao, Mohammad Saleh, Peter J. Liu
arXiv: 1912.08777

Papers citing "PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization"

12 of 1,012 citing papers shown:

Abstractive Summarization with Combination of Pre-trained Sequence-to-Sequence and Saliency Models
Itsumi Saito, Kyosuke Nishida, Kosuke Nishida, J. Tomita
29 Mar 2020

Pre-trained Models for Natural Language Processing: A Survey
Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, Xuanjing Huang
18 Mar 2020

Document Ranking with a Pretrained Sequence-to-Sequence Model
Rodrigo Nogueira, Zhiying Jiang, Jimmy J. Lin
14 Mar 2020

The growing amplification of social media: Measuring temporal and social contagion dynamics for over 150 languages on Twitter for 2009-2020
Thayer Alshaabi, D. R. Dewhurst, J. Minot, M. V. Arnold, J. L. Adams, C. Danforth, P. Dodds
07 Mar 2020

Modelling Latent Skills for Multitask Language Generation
Kris Cao, Dani Yogatama
21 Feb 2020

Learning by Semantic Similarity Makes Abstractive Summarization Better
Wonjin Yoon, Yoonsun Yeo, Minbyul Jeong, Bong-Jun Yi, Jaewoo Kang
18 Feb 2020

ERNIE-GEN: An Enhanced Multi-Flow Pre-training and Fine-tuning Framework for Natural Language Generation
Dongling Xiao, Han Zhang, Yukun Li, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang
26 Jan 2020

ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training
Weizhen Qi, Yu Yan, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang, Ming Zhou
13 Jan 2020

Leveraging Lead Bias for Zero-shot Abstractive News Summarization
Chenguang Zhu, Ziyi Yang, R. Gmyr, Michael Zeng, Xuedong Huang
25 Dec 2019

On Extractive and Abstractive Neural Document Summarization with Transformer Language Models
Sandeep Subramanian, Raymond Li, Jonathan Pilault, C. Pal
07 Sep 2019

Facet-Aware Evaluation for Extractive Summarization
Yuning Mao, Liyuan Liu, Qi Zhu, Xiang Ren, Jiawei Han
27 Aug 2019

Neural Abstractive Text Summarization with Sequence-to-Sequence Models
Tian Shi, Yaser Keneshloo, Naren Ramakrishnan, Chandan K. Reddy
05 Dec 2018