Pre-training Text-to-Text Transformers for Concept-centric Common Sense
arXiv:2011.07956 · 24 October 2020
Wangchunshu Zhou, Dong-Ho Lee, Ravi Kiran Selvam, Seyeon Lee, Bill Yuchen Lin, Xiang Ren
Papers citing "Pre-training Text-to-Text Transformers for Concept-centric Common Sense" (13 of 13 papers shown)
MOCHA: A Multi-Task Training Approach for Coherent Text Generation from Cognitive Perspective
Zhe Hu, Hou Pong Chan, Lifu Huang (26 Oct 2022)
Neural Theory-of-Mind? On the Limits of Social Intelligence in Large LMs
Maarten Sap, Ronan Le Bras, Daniel Fried, Yejin Choi (24 Oct 2022)
Retrieval Augmentation for Commonsense Reasoning: A Unified Approach
W. Yu, Chenguang Zhu, Zhihan Zhang, Shuohang Wang, Zhuosheng Zhang, Yuwei Fang, Meng Jiang (23 Oct 2022)
VLUE: A Multi-Task Benchmark for Evaluating Vision-Language Models
Wangchunshu Zhou, Yan Zeng, Shizhe Diao, Xinsong Zhang (30 May 2022)
Revisiting Generative Commonsense Reasoning: A Pre-Ordering Approach
Chao Zhao, Faeze Brahman, Tenghao Huang, Snigdha Chaturvedi (26 May 2022)
Go Back in Time: Generating Flashbacks in Stories with Event Temporal Prompts
Rujun Han, Hong Chen, Yufei Tian, Nanyun Peng (04 May 2022)
Knowledge Infused Decoding
Ruibo Liu, Guoqing Zheng, Shashank Gupta, Radhika Gaonkar, Chongyang Gao, Soroush Vosoughi, Milad Shokouhi, Ahmed Hassan Awadallah (06 Apr 2022)
ClarET: Pre-training a Correlation-Aware Context-To-Event Transformer for Event-Centric Generation and Classification
Yucheng Zhou, Tao Shen, Xiubo Geng, Guodong Long, Daxin Jiang (04 Mar 2022)
KGR^4: Retrieval, Retrospect, Refine and Rethink for Commonsense Generation
Xin Liu, Dayiheng Liu, Baosong Yang, Haibo Zhang, Junwei Ding, Wenqing Yao, Weihua Luo, Haiying Zhang, Jinsong Su (15 Dec 2021)
Interactive Model with Structural Loss for Language-based Abductive Reasoning
Linhao Li, Ming Xu, Yongfeng Dong, Xin Li, Ao Wang (01 Dec 2021)
Improving Sequence-to-Sequence Pre-training via Sequence Span Rewriting
Wangchunshu Zhou, Tao Ge, Canwen Xu, Ke Xu, Furu Wei (02 Jan 2021)
Knowledge Enhanced Contextual Word Representations
Matthew E. Peters, Mark Neumann, Robert L. Logan IV, Roy Schwartz, Vidur Joshi, Sameer Singh, Noah A. Smith (09 Sep 2019)
Language Models as Knowledge Bases?
Fabio Petroni, Tim Rocktäschel, Patrick Lewis, A. Bakhtin, Yuxiang Wu, Alexander H. Miller, Sebastian Riedel (03 Sep 2019)