Pre-training Text-to-Text Transformers for Concept-centric Common Sense

24 October 2020
Wangchunshu Zhou, Dong-Ho Lee, Ravi Kiran Selvam, Seyeon Lee, Bill Yuchen Lin, Xiang Ren
Tags: LRM, VLM
arXiv: 2011.07956

Papers citing "Pre-training Text-to-Text Transformers for Concept-centric Common Sense" (13 papers)
MOCHA: A Multi-Task Training Approach for Coherent Text Generation from Cognitive Perspective
Zhe Hu, Hou Pong Chan, Lifu Huang
26 Oct 2022
Neural Theory-of-Mind? On the Limits of Social Intelligence in Large LMs
Maarten Sap, Ronan Le Bras, Daniel Fried, Yejin Choi
24 Oct 2022
Retrieval Augmentation for Commonsense Reasoning: A Unified Approach
W. Yu, Chenguang Zhu, Zhihan Zhang, Shuohang Wang, Zhuosheng Zhang, Yuwei Fang, Meng Jiang
Tags: LRM, ReLM
23 Oct 2022
VLUE: A Multi-Task Benchmark for Evaluating Vision-Language Models
Wangchunshu Zhou, Yan Zeng, Shizhe Diao, Xinsong Zhang
Tags: CoGe, VLM
30 May 2022
Revisiting Generative Commonsense Reasoning: A Pre-Ordering Approach
Chao Zhao, Faeze Brahman, Tenghao Huang, Snigdha Chaturvedi
Tags: LRM
26 May 2022
Go Back in Time: Generating Flashbacks in Stories with Event Temporal Prompts
Rujun Han, Hong Chen, Yufei Tian, Nanyun Peng
04 May 2022
Knowledge Infused Decoding
Ruibo Liu, Guoqing Zheng, Shashank Gupta, Radhika Gaonkar, Chongyang Gao, Soroush Vosoughi, Milad Shokouhi, Ahmed Hassan Awadallah
Tags: KELM
06 Apr 2022
ClarET: Pre-training a Correlation-Aware Context-To-Event Transformer for Event-Centric Generation and Classification
Yucheng Zhou, Tao Shen, Xiubo Geng, Guodong Long, Daxin Jiang
04 Mar 2022
KGR^4: Retrieval, Retrospect, Refine and Rethink for Commonsense Generation
Xin Liu, Dayiheng Liu, Baosong Yang, Haibo Zhang, Junwei Ding, Wenqing Yao, Weihua Luo, Haiying Zhang, Jinsong Su
Tags: LRM
15 Dec 2021
Interactive Model with Structural Loss for Language-based Abductive Reasoning
Linhao Li, Ming Xu, Yongfeng Dong, Xin Li, Ao Wang
01 Dec 2021
Improving Sequence-to-Sequence Pre-training via Sequence Span Rewriting
Wangchunshu Zhou, Tao Ge, Canwen Xu, Ke Xu, Furu Wei
Tags: LRM
02 Jan 2021
Knowledge Enhanced Contextual Word Representations
Matthew E. Peters, Mark Neumann, Robert L. Logan IV, Roy Schwartz, Vidur Joshi, Sameer Singh, Noah A. Smith
09 Sep 2019
Language Models as Knowledge Bases?
Fabio Petroni, Tim Rocktäschel, Patrick Lewis, A. Bakhtin, Yuxiang Wu, Alexander H. Miller, Sebastian Riedel
Tags: KELM, AI4MH
03 Sep 2019