Does Pretraining for Summarization Require Knowledge Transfer?
arXiv:2109.04953 · 10 September 2021
Kundan Krishna, Jeffrey P. Bigham, Zachary Chase Lipton

Papers citing "Does Pretraining for Summarization Require Knowledge Transfer?"

14 / 14 papers shown

Responsible AI Considerations in Text Summarization Research: A Review of Current Practices
Yu Lu Liu, Meng Cao, Su Lin Blodgett, Jackie Chi Kit Cheung, Alexandra Olteanu, Adam Trischler
18 Nov 2023

Understanding the Role of Input Token Characters in Language Models: How Does Information Loss Affect Performance?
Ahmed Alajrami, Katerina Margatina, Nikolaos Aletras
AAML · 26 Oct 2023

Revisiting Hidden Representations in Transfer Learning for Medical Imaging
Dovile Juodelyte, Amelia Jiménez-Sánchez, V. Cheplygina
OOD · 16 Feb 2023

On the Role of Parallel Data in Cross-lingual Transfer Learning
Machel Reid, Mikel Artetxe
20 Dec 2022

Synthetic Pre-Training Tasks for Neural Machine Translation
Zexue He, Graeme W. Blackwood, Yikang Shen, Julian McAuley, Rogerio Feris
19 Dec 2022

Downstream Datasets Make Surprisingly Good Pretraining Corpora
Kundan Krishna, Saurabh Garg, Jeffrey P. Bigham, Zachary Chase Lipton
28 Sep 2022

MonoByte: A Pool of Monolingual Byte-level Language Models
Hugo Queiroz Abonizio, Leandro Rodrigues de Souza, R. Lotufo, Rodrigo Nogueira
22 Sep 2022

Insights into Pre-training via Simpler Synthetic Tasks
Yuhuai Wu, Felix Li, Percy Liang
AIMat · 21 Jun 2022

E2S2: Encoding-Enhanced Sequence-to-Sequence Pretraining for Language Understanding and Generation
Qihuang Zhong, Liang Ding, Juhua Liu, Bo Du, Dacheng Tao
30 May 2022

Fusing finetuned models for better pretraining
Leshem Choshen, Elad Venezian, Noam Slonim, Yoav Katz
FedML · AI4CE · MoMe · 06 Apr 2022

Measuring the Impact of Individual Domain Factors in Self-Supervised Pre-Training
Ramon Sanabria, Wei-Ning Hsu, Alexei Baevski, Michael Auli
01 Mar 2022

Are Large-scale Datasets Necessary for Self-Supervised Pre-training?
Alaaeldin El-Nouby, Gautier Izacard, Hugo Touvron, Ivan Laptev, Hervé Jégou, Edouard Grave
SSL · 20 Dec 2021

ScisummNet: A Large Annotated Corpus and Content-Impact Models for Scientific Paper Summarization with Citation Networks
Michihiro Yasunaga, Jungo Kasai, Rui Zhang, Alexander R. Fabbri, Irene Z Li, Dan Friedman, Dragomir R. Radev
04 Sep 2019

Text Summarization with Pretrained Encoders
Yang Liu, Mirella Lapata
MILM · 22 Aug 2019