Predicting Discourse Trees from Transformer-based Neural Summarizers

14 April 2021
Wen Xiao, Patrick Huber, Giuseppe Carenini

Papers citing "Predicting Discourse Trees from Transformer-based Neural Summarizers"

16 citing papers listed

Do We Really Need That Many Parameters In Transformer For Extractive Summarization? Discourse Can Help !
Wen Xiao, Patrick Huber, Giuseppe Carenini
03 Dec 2020

MEGA RST Discourse Treebanks with Structure and Nuclearity from Scalable Distant Sentiment Supervision
Patrick Huber, Giuseppe Carenini
05 Nov 2020

Language Models are Open Knowledge Graphs
Chenguang Wang, Xiao Liu, Basel Alomair
22 Oct 2020

Predicting Discourse Structure using Distant Supervision from Sentiment
Patrick Huber, Giuseppe Carenini
30 Oct 2019

Discourse-Aware Neural Extractive Text Summarization
Jiacheng Xu, Zhe Gan, Yu Cheng, Jingjing Liu
30 Oct 2019

Extractive Summarization of Long Documents by Combining Global and Local Context
Wen Xiao, Giuseppe Carenini
17 Sep 2019

Text Summarization with Pretrained Encoders
Yang Liu, Mirella Lapata
22 Aug 2019

From Balustrades to Pierre Vinken: Looking for Syntax in Transformer Self-Attentions
David Marecek, Rudolf Rosa
05 Jun 2019

Evaluating Discourse in Structured Text Representations
Elisa Ferracane, Greg Durrett, Junyi Jessy Li, K. Erk
04 Jun 2019

Hierarchical Transformers for Multi-Document Summarization
Yang Liu, Mirella Lapata
30 May 2019

HIBERT: Document Level Pre-training of Hierarchical Bidirectional Transformers for Document Summarization
Xingxing Zhang, Furu Wei, M. Zhou
16 May 2019

Linguistic Knowledge and Transferability of Contextual Representations
Nelson F. Liu, Matt Gardner, Yonatan Belinkov, Matthew E. Peters, Noah A. Smith
21 Mar 2019

A Discourse-Aware Attention Model for Abstractive Summarization of Long Documents
Arman Cohan, Franck Dernoncourt, Doo Soon Kim, Trung Bui, Seokhwan Kim, W. Chang, Nazli Goharian
16 Apr 2018

Attention Is All You Need
Ashish Vaswani, Noam M. Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan Gomez, Lukasz Kaiser, Illia Polosukhin
12 Jun 2017

Learning Structured Text Representations
Yang Liu, Mirella Lapata
25 May 2017

Abstractive Text Summarization Using Sequence-to-Sequence RNNs and Beyond
Ramesh Nallapati, Bowen Zhou, Cicero Nogueira dos Santos, Çağlar Gülçehre, Bing Xiang
19 Feb 2016