Pre-training Is (Almost) All You Need: An Application to Commonsense Reasoning
Alexandre Tamborrino, Nicola Pellicanò, Baptiste Pannier, Pascal Voitot, Louise Naudin
arXiv:2004.14074, 29 April 2020 [LRM]
Papers citing "Pre-training Is (Almost) All You Need: An Application to Commonsense Reasoning" (15 of 15 papers shown):

1. Automated Program Repair: Emerging trends pose and expose problems for benchmarks
   J. Renzullo, Pemma Reiter, Westley Weimer, Stephanie Forrest (08 May 2024)

2. Filling the Image Information Gap for VQA: Prompting Large Language Models to Proactively Ask Questions
   Ziyue Wang, Chi Chen, Peng Li, Yang Liu (20 Nov 2023) [LRM]

3. VLIS: Unimodal Language Models Guide Multimodal Language Generation
   Jiwan Chung, Youngjae Yu (15 Oct 2023) [VLM]

4. Multi-hop Commonsense Knowledge Injection Framework for Zero-Shot Commonsense Question Answering
   Xin Guan, Biwei Cao, Qingqing Gao, Zheng Yin, Bo Liu, Jiuxin Cao (10 May 2023)

5. Natural Language Reasoning, A Survey
   Fei Yu, Hongbo Zhang, Prayag Tiwari, Benyou Wang (26 Mar 2023) [ReLM, LRM]

6. Event knowledge in large language models: the gap between the impossible and the unlikely
   Carina Kauf, Anna A. Ivanova, Giulia Rambelli, Emmanuele Chersoni, Jingyuan Selena She, Zawad Chowdhury, Evelina Fedorenko, Alessandro Lenci (02 Dec 2022)

7. Evaluate Confidence Instead of Perplexity for Zero-shot Commonsense Reasoning
   Letian Peng, Z. Li, Hai Zhao (23 Aug 2022) [ReLM, LRM]

8. LogiGAN: Learning Logical Reasoning via Adversarial Pre-training
   Xinyu Pi, Wanjun Zhong, Yan Gao, Nan Duan, Jian-Guang Lou (18 May 2022) [NAI, GAN, LRM, AI4CE]

9. What Language Model Architecture and Pretraining Objective Work Best for Zero-Shot Generalization?
   Thomas Wang, Adam Roberts, Daniel Hesslow, Teven Le Scao, Hyung Won Chung, Iz Beltagy, Julien Launay, Colin Raffel (12 Apr 2022)

10. Rethinking Why Intermediate-Task Fine-Tuning Works
    Ting-Yun Chang, Chi-Jen Lu (26 Aug 2021) [LRM]

11. REPT: Bridging Language Models and Machine Reading Comprehension via Retrieval-Based Pre-training
    Fangkai Jiao, Yangyang Guo, Yilin Niu, Feng Ji, Feng-Lin Li, Liqiang Nie (10 May 2021) [LRM]

12. Is Incoherence Surprising? Targeted Evaluation of Coherence Prediction from Language Models
    Anne Beyer, Sharid Loáiciga, David Schlangen (07 May 2021)

13. Relational World Knowledge Representation in Contextual Language Models: A Review
    Tara Safavi, Danai Koutra (12 Apr 2021) [KELM]

14. Learning to Rationalize for Nonmonotonic Reasoning with Distant Supervision
    Faeze Brahman, Vered Shwartz, Rachel Rudinger, Yejin Choi (14 Dec 2020) [LRM]

15. Improving Event Duration Prediction via Time-aware Pre-training
    Zonglin Yang, Xinya Du, Alexander M. Rush, Claire Cardie (05 Nov 2020) [VLM]