ReCoRD: Bridging the Gap between Human and Machine Commonsense Reading Comprehension
arXiv:1810.12885 · 30 October 2018
Sheng Zhang, Xiaodong Liu, Jingjing Liu, Jianfeng Gao, Kevin Duh, Benjamin Van Durme
Papers citing "ReCoRD: Bridging the Gap between Human and Machine Commonsense Reading Comprehension" (50 of 86 papers shown):
1. MODP: Multi Objective Directional Prompting (25 Apr 2025). Aashutosh Nema, Samaksh Gulati, Evangelos Giakoumakis, Bipana Thapaliya. [LLMAG]
2. The Box is in the Pen: Evaluating Commonsense Reasoning in Neural Machine Translation (05 Mar 2025). Jie He, Tao Wang, Deyi Xiong, Qun Liu. [ELM, LRM]
3. LoRTA: Low Rank Tensor Adaptation of Large Language Models (05 Oct 2024). Ignacio Hounie, Charilaos I. Kanatsoulis, Arnuv Tandon, Alejandro Ribeiro.
4. CLOCR-C: Context Leveraging OCR Correction with Pre-trained Language Models (30 Aug 2024). Jonathan Bourne.
5. Light-PEFT: Lightening Parameter-Efficient Fine-Tuning via Early Pruning (06 Jun 2024). Naibin Gu, Peng Fu, Xiyu Liu, Bowen Shen, Zheng-Shen Lin, Weiping Wang.
6. Evaluation of Retrieval-Augmented Generation: A Survey (13 May 2024). Hao Yu, Aoran Gan, Kai Zhang, Shiwei Tong, Qi Liu, Zhaofeng Liu. [3DV]
7. ChroniclingAmericaQA: A Large-scale Question Answering Dataset based on Historical American Newspaper Pages (26 Mar 2024). Bhawna Piryani, Jamshid Mozafari, Adam Jatowt. [RALM]
8. AGent: A Novel Pipeline for Automatically Creating Unanswerable Questions (10 Sep 2023). Son Quoc Tran, Gia-Huy Do, Phong Nguyen-Thuan Do, Matt Kretchmar, Xinya Du.
9. Teaching Smaller Language Models To Generalise To Unseen Compositional Questions (02 Aug 2023). Tim Hartill, N. Tan, Michael Witbrock, Patricia J. Riddle. [ReLM, KELM, LRM]
10. NormBank: A Knowledge Bank of Situational Social Norms (26 May 2023). Caleb Ziems, Jane Dwivedi-Yu, Yi-Chia Wang, A. Halevy, Diyi Yang.
11. Learning Easily Updated General Purpose Text Representations with Adaptable Task-Specific Prefixes (22 May 2023). Kuan-Hao Huang, L Tan, Rui Hou, Sinong Wang, Amjad Almahairi, Ruty Rinott. [AI4CE]
12. RWKV: Reinventing RNNs for the Transformer Era (22 May 2023). Bo Peng, Eric Alcaide, Quentin G. Anthony, Alon Albalak, Samuel Arcadinho, …, Qihang Zhao, P. Zhou, Qinghua Zhou, Jian Zhu, Rui-Jie Zhu.
13. Prompting with Pseudo-Code Instructions (19 May 2023). Mayank Mishra, Prince Kumar, Riyaz Ahmad Bhat, V. Rudramurthy, Danish Contractor, Srikanth G. Tamilselvam.
14. Towards More Robust NLP System Evaluation: Handling Missing Scores in Benchmarks (17 May 2023). Anas Himmi, Ekhine Irurozki, Nathan Noiry, Stéphan Clémençon, Pierre Colombo.
15. What's the Meaning of Superhuman Performance in Today's NLU? (15 May 2023). Simone Tedeschi, Johan Bos, T. Declerck, Jan Hajic, Daniel Hershcovich, …, Simon Krek, Steven Schockaert, Rico Sennrich, Ekaterina Shutova, Roberto Navigli. [ELM, LM&MA, VLM, ReLM, LRM]
16. Residual Prompt Tuning: Improving Prompt Tuning with Residual Reparameterization (06 May 2023). Anastasia Razdaibiedina, Yuning Mao, Rui Hou, Madian Khabsa, M. Lewis, Jimmy Ba, Amjad Almahairi. [VLM]
17. BloombergGPT: A Large Language Model for Finance (30 Mar 2023). Shijie Wu, Ozan Irsoy, Steven Lu, Vadim Dabravolski, Mark Dredze, Sebastian Gehrmann, P. Kambadur, David S. Rosenberg, Gideon Mann. [AIFin]
18. Multitask Prompt Tuning Enables Parameter-Efficient Transfer Learning (06 Mar 2023). Zhen Wang, Yikang Shen, Leonid Karlinsky, Rogerio Feris, Huan Sun, Yoon Kim. [VLM, VPVLM]
19. Learning to Initialize: Can Meta Learning Improve Cross-task Generalization in Prompt Tuning? (16 Feb 2023). Chengwei Qin, Q. Li, Ruochen Zhao, Chenyu You. [VLM, LRM]
20. Symbolic Discovery of Optimization Algorithms (13 Feb 2023). Xiangning Chen, Chen Liang, Da Huang, Esteban Real, Kaiyuan Wang, …, Xuanyi Dong, Thang Luong, Cho-Jui Hsieh, Yifeng Lu, Quoc V. Le.
21. DeepSpeed Data Efficiency: Improving Deep Learning Model Quality and Training Efficiency via Efficient Data Sampling and Routing (07 Dec 2022). Conglong Li, Z. Yao, Xiaoxia Wu, Minjia Zhang, Connor Holmes, Cheng Li, Yuxiong He.
22. Random-LTD: Random and Layerwise Token Dropping Brings Efficient Training for Large-scale Transformers (17 Nov 2022). Z. Yao, Xiaoxia Wu, Conglong Li, Connor Holmes, Minjia Zhang, Cheng-rong Li, Yuxiong He.
23. Task-aware Retrieval with Instructions (16 Nov 2022). Akari Asai, Timo Schick, Patrick Lewis, Xilun Chen, Gautier Izacard, Sebastian Riedel, Hannaneh Hajishirzi, Wen-tau Yih.
24. RQUGE: Reference-Free Metric for Evaluating Question Generation by Answering the Question (02 Nov 2022). Alireza Mohammadshahi, Thomas Scialom, Majid Yazdani, Pouya Yanki, Angela Fan, James Henderson, Marzieh Saeidi.
25. Exploring Mode Connectivity for Pre-trained Language Models (25 Oct 2022). Yujia Qin, Cheng Qian, Jing Yi, Weize Chen, Yankai Lin, Xu Han, Zhiyuan Liu, Maosong Sun, Jie Zhou.
26. Different Tunes Played with Equal Skill: Exploring a Unified Optimization Subspace for Delta Tuning (24 Oct 2022). Jing Yi, Weize Chen, Yujia Qin, Yankai Lin, Ning Ding, Xu Han, Zhiyuan Liu, Maosong Sun, Jie Zhou.
27. Multi-CLS BERT: An Efficient Alternative to Traditional Ensembling (10 Oct 2022). Haw-Shiuan Chang, Ruei-Yao Sun, Kathryn Ricci, Andrew McCallum.
28. Ask Me Anything: A simple strategy for prompting language models (05 Oct 2022). Simran Arora, A. Narayan, Mayee F. Chen, Laurel J. Orr, Neel Guha, Kush S. Bhatia, Ines Chami, Frederic Sala, Christopher Ré. [ReLM, LRM]
29. Automatic Label Sequence Generation for Prompting Sequence-to-sequence Models (20 Sep 2022). Zichun Yu, Tianyu Gao, Zhengyan Zhang, Yankai Lin, Zhiyuan Liu, Maosong Sun, Jie Zhou. [VLM, LRM]
30. Why Do Neural Language Models Still Need Commonsense Knowledge to Handle Semantic Variations in Question Answering? (01 Sep 2022). Sunjae Kwon, Cheongwoong Kang, Jiyeon Han, Jaesik Choi.
31. Reducing Retraining by Recycling Parameter-Efficient Prompts (10 Aug 2022). Brian Lester, Joshua Yurtsever, Siamak Shakeri, Noah Constant.
32. Few-shot Adaptation Works with UnpredicTable Data (01 Aug 2022). Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez.
33. ZeroQuant: Efficient and Affordable Post-Training Quantization for Large-Scale Transformers (04 Jun 2022). Z. Yao, Reza Yazdani Aminabadi, Minjia Zhang, Xiaoxia Wu, Conglong Li, Yuxiong He. [VLM, MQ]
34. Eliciting and Understanding Cross-Task Skills with Task-Level Mixture-of-Experts (25 May 2022). Qinyuan Ye, Juan Zha, Xiang Ren. [MoE]
35. ATTEMPT: Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts (24 May 2022). Akari Asai, Mohammadreza Salehi, Matthew E. Peters, Hannaneh Hajishirzi.
36. On the Role of Bidirectionality in Language Model Pre-Training (24 May 2022). Mikel Artetxe, Jingfei Du, Naman Goyal, Luke Zettlemoyer, Ves Stoyanov.
37. Down and Across: Introducing Crossword-Solving as a New NLP Benchmark (20 May 2022). Saurabh Kulshreshtha, Olga Kovaleva, Namrata Shivagunde, Anna Rumshisky. [ELM, LRM]
38. Improving In-Context Few-Shot Learning via Self-Supervised Training (03 May 2022). Mingda Chen, Jingfei Du, Ramakanth Pasunuru, Todor Mihaylov, Srini Iyer, Ves Stoyanov, Zornitsa Kozareva. [SSL, AI4MH]
39. On the Limitations of Dataset Balancing: The Lost Battle Against Spurious Correlations (27 Apr 2022). Roy Schwartz, Gabriel Stanovsky.
40. Zero-shot Entity and Tweet Characterization with Designed Conditional Prompts and Contexts (18 Apr 2022). S. Srivatsa, Tushar Mohan, Kumari Neha, Nishchay Malakar, Ponnurangam Kumaraguru, Srinath Srinivasa.
41. WOODS: Benchmarks for Out-of-Distribution Generalization in Time Series (18 Mar 2022). Jean-Christophe Gagnon-Audet, Kartik Ahuja, Mohammad Javad Darvishi Bayazi, Pooneh Mousavi, G. Dumas, Irina Rish. [OOD, CML, AI4TS]
42. Delta Tuning: A Comprehensive Study of Parameter Efficient Methods for Pre-trained Language Models (14 Mar 2022). Ning Ding, Yujia Qin, Guang Yang, Fu Wei, Zonghan Yang, …, Jianfei Chen, Yang Liu, Jie Tang, Juan Li, Maosong Sun.
43. Efficient Language Modeling with Sparse all-MLP (14 Mar 2022). Ping Yu, Mikel Artetxe, Myle Ott, Sam Shleifer, Hongyu Gong, Ves Stoyanov, Xian Li. [MoE]
44. ST-MoE: Designing Stable and Transferable Sparse Expert Models (17 Feb 2022). Barret Zoph, Irwan Bello, Sameer Kumar, Nan Du, Yanping Huang, J. Dean, Noam M. Shazeer, W. Fedus. [MoE]
45. Conversational Agents: Theory and Applications (07 Feb 2022). M. Wahde, M. Virgolin. [LLMAG]
46. Efficient Large Scale Language Modeling with Mixtures of Experts (20 Dec 2021). Mikel Artetxe, Shruti Bhosale, Naman Goyal, Todor Mihaylov, Myle Ott, …, Jeff Wang, Luke Zettlemoyer, Mona T. Diab, Zornitsa Kozareva, Ves Stoyanov. [MoE]
47. DeBERTaV3: Improving DeBERTa using ELECTRA-Style Pre-Training with Gradient-Disentangled Embedding Sharing (18 Nov 2021). Pengcheng He, Jianfeng Gao, Weizhu Chen.
48. Few-Shot Self-Rationalization with Natural Language Prompts (16 Nov 2021). Ana Marasović, Iz Beltagy, Doug Downey, Matthew E. Peters. [LRM]
49. CLUES: Few-Shot Learning Evaluation in Natural Language Understanding (04 Nov 2021). Subhabrata Mukherjee, Xiaodong Liu, Guoqing Zheng, Saghar Hosseini, Hao Cheng, Greg Yang, Christopher Meek, Ahmed Hassan Awadallah, Jianfeng Gao. [ELM]
50. MetaICL: Learning to Learn In Context (29 Oct 2021). Sewon Min, M. Lewis, Luke Zettlemoyer, Hannaneh Hajishirzi. [LRM]