BoolQ: Exploring the Surprising Difficulty of Natural Yes/No Questions

24 May 2019
Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, Kristina Toutanova

Papers citing "BoolQ: Exploring the Surprising Difficulty of Natural Yes/No Questions"

50 / 1,041 papers shown

U3E: Unsupervised and Erasure-based Evidence Extraction for Machine Reading Comprehension
Suzhe He, Shumin Shi, Chenghao Wu
06 Oct 2022

Ask Me Anything: A simple strategy for prompting language models
Simran Arora, A. Narayan, Mayee F. Chen, Laurel J. Orr, Neel Guha, Kush S. Bhatia, Ines Chami, Frederic Sala, Christopher Ré
Communities: ReLM, LRM
05 Oct 2022

Automatic Label Sequence Generation for Prompting Sequence-to-sequence Models
Zichun Yu, Tianyu Gao, Zhengyan Zhang, Yankai Lin, Zhiyuan Liu, Maosong Sun, Jie Zhou
Communities: VLM, LRM
20 Sep 2022

A Multi-turn Machine Reading Comprehension Framework with Rethink Mechanism for Emotion-Cause Pair Extraction
Changzhi Zhou, Dandan Song, Jing Xu, Zhijing Wu
16 Sep 2022

Activity report analysis with automatic single or multispan answer extraction
R. Choudhary, A. Sridhar, Erik M. Visser
09 Sep 2022

Coarse-to-Fine: Hierarchical Multi-task Learning for Natural Language Understanding
Zhaoye Fei, Yu Tian, Yongkang Wu, Xinyu Zhang, Yutao Zhu, ..., Dejiang Kong, Ruofei Lai, Bo Zhao, Zhicheng Dou, Xipeng Qiu
19 Aug 2022

Few-shot Adaptation Works with UnpredicTable Data
Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez
01 Aug 2022

Analyzing Bagging Methods for Language Models
Pranab Islam, Shaan Khosla, Arthur Lok, Mudit Saxena
Communities: UQCV, MoE, ELM
19 Jul 2022

Rationale-Augmented Ensembles in Language Models
Xuezhi Wang, Jason W. Wei, Dale Schuurmans, Quoc Le, Ed H. Chi, Denny Zhou
Communities: ReLM, LRM
02 Jul 2022

Modern Question Answering Datasets and Benchmarks: A Survey
Zhen Wang
30 Jun 2022

Unified-IO: A Unified Model for Vision, Language, and Multi-Modal Tasks
Jiasen Lu, Christopher Clark, Rowan Zellers, Roozbeh Mottaghi, Aniruddha Kembhavi
Communities: ObjD, VLM, MLLM
17 Jun 2022

GAAMA 2.0: An Integrated System that Answers Boolean and Extractive Questions
Scott McCarley, Mihaela A. Bornea, Sara Rosenthal, Anthony Ferritto, Md Arafat Sultan, Avirup Sil, Radu Florian
16 Jun 2022

Language Models are General-Purpose Interfaces
Y. Hao, Haoyu Song, Li Dong, Shaohan Huang, Zewen Chi, Wenhui Wang, Shuming Ma, Furu Wei
Communities: MLLM
13 Jun 2022

Instance-wise Prompt Tuning for Pretrained Language Models
Yuezihan Jiang, Hao Yang, Junyang Lin, Hanyu Zhao, An Yang, Chang Zhou, Hongxia Yang, Zhi-Xin Yang, Bin Cui
Communities: VLM
04 Jun 2022

ZeroQuant: Efficient and Affordable Post-Training Quantization for Large-Scale Transformers
Z. Yao, Reza Yazdani Aminabadi, Minjia Zhang, Xiaoxia Wu, Conglong Li, Yuxiong He
Communities: VLM, MQ
04 Jun 2022

Prompting ELECTRA: Few-Shot Learning with Discriminative Pre-Trained Models
Mengzhou Xia, Mikel Artetxe, Jingfei Du, Danqi Chen, Ves Stoyanov
30 May 2022

Eliciting and Understanding Cross-Task Skills with Task-Level Mixture-of-Experts
Qinyuan Ye, Juan Zha, Xiang Ren
Communities: MoE
25 May 2022

Rethinking Fano's Inequality in Ensemble Learning
Terufumi Morishita, Gaku Morio, Shota Horiguchi, Hiroaki Ozaki, N. Nukaga
Communities: FedML
25 May 2022

Is a Question Decomposition Unit All We Need?
Pruthvi H. Patel, Swaroop Mishra, Mihir Parmar, Chitta Baral
Communities: ReLM
25 May 2022

ClaimDiff: Comparing and Contrasting Claims on Contentious Issues
Miyoung Ko, Ingyu Seong, Hwaran Lee, Joonsuk Park, Minsuk Chang, Minjoon Seo
24 May 2022

ATTEMPT: Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts
Akari Asai, Mohammadreza Salehi, Matthew E. Peters, Hannaneh Hajishirzi
24 May 2022

Vector-Quantized Input-Contextualized Soft Prompts for Natural Language Understanding
Rishabh Bhardwaj, Amrita Saha, Guosheng Lin, Soujanya Poria
Communities: VLM, VPVLM
23 May 2022

Calibration of Natural Language Understanding Models with Venn-ABERS Predictors
Patrizio Giovannotti
21 May 2022

Learning Rate Curriculum
Florinel-Alin Croitoru, Nicolae-Cătălin Ristea, Radu Tudor Ionescu, N. Sebe
18 May 2022

Making Pretrained Language Models Good Long-tailed Learners
Chen Zhang, Lei Ren, Jingang Wang, Wei Wu, Dawei Song
Communities: RALM, VLM
11 May 2022

ProQA: Structural Prompt-based Pre-training for Unified Question Answering
Wanjun Zhong, Yifan Gao, Ning Ding, Yujia Qin, Zhiyuan Liu, Ming Zhou, Jiahai Wang, Jian Yin, Nan Duan
09 May 2022

Improving In-Context Few-Shot Learning via Self-Supervised Training
Mingda Chen, Jingfei Du, Ramakanth Pasunuru, Todor Mihaylov, Srini Iyer, Ves Stoyanov, Zornitsa Kozareva
Communities: SSL, AI4MH
03 May 2022

Science Checker: Extractive-Boolean Question Answering For Scientific Fact Checking
Loïc Rakotoson, Charles Letaillieur, S. Massip, F. Laleye
26 Apr 2022

Zero-shot Entity and Tweet Characterization with Designed Conditional Prompts and Contexts
S. Srivatsa, Tushar Mohan, Kumari Neha, Nishchay Malakar, Ponnurangam Kumaraguru, Srinath Srinivasa
18 Apr 2022

Sparsely Activated Mixture-of-Experts are Robust Multi-Task Learners
Shashank Gupta, Subhabrata Mukherjee, K. Subudhi, Eduardo Gonzalez, Damien Jose, Ahmed Hassan Awadallah, Jianfeng Gao
Communities: MoE
16 Apr 2022

What Language Model Architecture and Pretraining Objective Work Best for Zero-Shot Generalization?
Thomas Wang, Adam Roberts, Daniel Hesslow, Teven Le Scao, Hyung Won Chung, Iz Beltagy, Julien Launay, Colin Raffel
12 Apr 2022

KOBEST: Korean Balanced Evaluation of Significant Tasks
Dohyeong Kim, Myeongjun Jang, D. Kwon, Eric Davis
Communities: ALM
09 Apr 2022

Fusing finetuned models for better pretraining
Leshem Choshen, Elad Venezian, Noam Slonim, Yoav Katz
Communities: FedML, AI4CE, MoMe
06 Apr 2022

Training Compute-Optimal Large Language Models
Jordan Hoffmann, Sebastian Borgeaud, A. Mensch, Elena Buchatskaya, Trevor Cai, ..., Karen Simonyan, Erich Elsen, Jack W. Rae, Oriol Vinyals, Laurent Sifre
Communities: AI4TS
29 Mar 2022

REx: Data-Free Residual Quantization Error Expansion
Edouard Yvinec, Arnaud Dapogny, Matthieu Cord, Kévin Bailly
Communities: MQ
28 Mar 2022

UKP-SQuARE: An Online Platform for Question Answering Research
Tim Baumgärtner, Kexin Wang, Rachneet Sachdeva, Max Eichler, Gregor Geigle, ..., Leonardo F. R. Ribeiro, Jonas Pfeiffer, Nils Reimers, Gözde Gül Şahin, Iryna Gurevych
25 Mar 2022

ZS4IE: A toolkit for Zero-Shot Information Extraction with simple Verbalizations
Oscar Sainz, Haoling Qiu, Oier López de Lacalle, Eneko Agirre, Bonan Min
Communities: SyDa
25 Mar 2022

Self-Consistency Improves Chain of Thought Reasoning in Language Models
Xuezhi Wang, Jason W. Wei, Dale Schuurmans, Quoc Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery, Denny Zhou
Communities: ReLM, BDL, LRM, AI4CE
21 Mar 2022

Word Order Does Matter (And Shuffled Language Models Know It)
Vinit Ravishankar, Mostafa Abdou, Artur Kulmizev, Anders Søgaard
21 Mar 2022

KMIR: A Benchmark for Evaluating Knowledge Memorization, Identification and Reasoning Abilities of Language Models
Daniel Gao, Yantao Jia, Lei Li, Chengzhen Fu, Zhicheng Dou, Hao Jiang, Xinyu Zhang, Lei Chen, Bo Zhao
Communities: KELM
28 Feb 2022

UnifiedQA-v2: Stronger Generalization via Broader Cross-Format Training
Daniel Khashabi, Yeganeh Kordi, Hannaneh Hajishirzi
23 Feb 2022

A Survey of Knowledge-Intensive NLP with Pre-Trained Language Models
Da Yin, Li Dong, Hao Cheng, Xiaodong Liu, Kai-Wei Chang, Furu Wei, Jianfeng Gao
Communities: KELM
17 Feb 2022

Revisiting Parameter-Efficient Tuning: Are We Really There Yet?
Guanzheng Chen, Fangyu Liu, Zaiqiao Meng, Shangsong Liang
16 Feb 2022

Russian SuperGLUE 1.1: Revising the Lessons not Learned by Russian NLP models
Alena Fenogenova, Maria Tikhonova, Vladislav Mikhailov, Tatiana Shavrina, Anton A. Emelyanov, Denis Shevelev, Alexander Kukushkin, Valentin Malykh, Ekaterina Artemova
Communities: AAML, VLM, ELM
15 Feb 2022

Scaling Laws Under the Microscope: Predicting Transformer Performance from Small Scale Experiments
Maor Ivgi, Y. Carmon, Jonathan Berant
13 Feb 2022

Exploring the Limits of Domain-Adaptive Training for Detoxifying Large-Scale Language Models
Wei Ping, Ming-Yu Liu, Chaowei Xiao, P. Xu, M. Patwary, M. Shoeybi, Bo-wen Li, Anima Anandkumar, Bryan Catanzaro
08 Feb 2022

What are the best systems? New perspectives on NLP Benchmarking
Pierre Colombo, Nathan Noiry, Ekhine Irurozki, Stéphan Clémençon
08 Feb 2022

Co-training Improves Prompt-based Learning for Large Language Models
Hunter Lang, Monica Agrawal, Yoon Kim, David Sontag
Communities: VLM, LRM
02 Feb 2022

Describing Differences between Text Distributions with Natural Language
Ruiqi Zhong, Charles Burton Snell, Dan Klein, Jacob Steinhardt
Communities: VLM
28 Jan 2022

Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B, A Large-Scale Generative Language Model
Shaden Smith, M. Patwary, Brandon Norick, P. LeGresley, Samyam Rajbhandari, ..., M. Shoeybi, Yuxiong He, Michael Houston, Saurabh Tiwary, Bryan Catanzaro
Communities: MoE
28 Jan 2022