BoolQ: Exploring the Surprising Difficulty of Natural Yes/No Questions

24 May 2019
Christopher Clark
Kenton Lee
Ming-Wei Chang
Tom Kwiatkowski
Michael Collins
Kristina Toutanova
arXiv:1905.10044

Papers citing "BoolQ: Exploring the Surprising Difficulty of Natural Yes/No Questions"

50 / 1,143 papers shown
Scaling Data-Constrained Language Models
Niklas Muennighoff
Alexander M. Rush
Boaz Barak
Teven Le Scao
Aleksandra Piktus
Nouamane Tazi
S. Pyysalo
Thomas Wolf
Colin Raffel
ALM
178
226
0
25 May 2023
Self-Evolution Learning for Discriminative Language Model Pretraining
Qihuang Zhong
Liang Ding
Juhua Liu
Bo Du
Dacheng Tao
91
12
0
24 May 2023
Revisiting Token Dropping Strategy in Efficient BERT Pretraining
Qihuang Zhong
Liang Ding
Juhua Liu
Xuebo Liu
Min Zhang
Bo Du
Dacheng Tao
VLM
73
10
0
24 May 2023
Towards Adaptive Prefix Tuning for Parameter-Efficient Language Model Fine-tuning
Zhen-Ru Zhang
Chuanqi Tan
Haiyang Xu
Chengyu Wang
Jun Huang
Songfang Huang
73
38
0
24 May 2023
Universal Self-Adaptive Prompting
Xingchen Wan
Ruoxi Sun
Hootan Nakhost
H. Dai
Julian Martin Eisenschlos
Sercan O. Arik
Tomas Pfister
LRM
108
12
0
24 May 2023
Adapting Language Models to Compress Contexts
Alexis Chevalier
Alexander Wettig
Anirudh Ajith
Danqi Chen
LLMAG
79
191
0
24 May 2023
Using Natural Language Explanations to Rescale Human Judgments
Manya Wadhwa
Jifan Chen
Junyi Jessy Li
Greg Durrett
78
8
0
24 May 2023
Mastering the ABCDs of Complex Questions: Answer-Based Claim Decomposition for Fine-grained Self-Evaluation
Nishant Balepur
Jie Huang
Samraj Moorjani
Hari Sundaram
Kevin Chen-Chuan Chang
ReLM
43
0
0
24 May 2023
In-Context Demonstration Selection with Cross Entropy Difference
Dan Iter
Reid Pryzant
Ruochen Xu
Shuohang Wang
Yang Liu
Yichong Xu
Chenguang Zhu
73
14
0
24 May 2023
Mixture-of-Experts Meets Instruction Tuning: A Winning Combination for Large Language Models
Sheng Shen
Le Hou
Yan-Quan Zhou
Nan Du
Shayne Longpre
...
Vincent Zhao
Hongkun Yu
Kurt Keutzer
Trevor Darrell
Denny Zhou
ALM, MoE
105
60
0
24 May 2023
Few-shot Unified Question Answering: Tuning Models or Prompts?
Srijan Bansal
Semih Yavuz
Bo Pang
Meghana Moorthy Bhat
Yingbo Zhou
97
2
0
23 May 2023
Learning Easily Updated General Purpose Text Representations with Adaptable Task-Specific Prefixes
Kuan-Hao Huang
L Tan
Rui Hou
Sinong Wang
Amjad Almahairi
Ruty Rinott
AI4CE
78
0
0
22 May 2023
Measuring Inductive Biases of In-Context Learning with Underspecified Demonstrations
Chenglei Si
Dan Friedman
Nitish Joshi
Shi Feng
Danqi Chen
He He
77
48
0
22 May 2023
TaskWeb: Selecting Better Source Tasks for Multi-task NLP
Joongwon Kim
Akari Asai
Gabriel Ilharco
Hannaneh Hajishirzi
87
12
0
22 May 2023
RWKV: Reinventing RNNs for the Transformer Era
Bo Peng
Eric Alcaide
Quentin G. Anthony
Alon Albalak
Samuel Arcadinho
...
Qihang Zhao
P. Zhou
Qinghua Zhou
Jian Zhu
Rui-Jie Zhu
240
614
0
22 May 2023
BiasAsker: Measuring the Bias in Conversational AI System
Yuxuan Wan
Wenxuan Wang
Pinjia He
Jiazhen Gu
Haonan Bai
Michael Lyu
89
69
0
21 May 2023
Prompting with Pseudo-Code Instructions
Mayank Mishra
Praveen Venkateswaran
Riyaz Ahmad Bhat
V. Rudramurthy
Danish Contractor
Srikanth G. Tamilselvam
105
14
0
19 May 2023
Separating form and meaning: Using self-consistency to quantify task understanding across multiple senses
Xenia Ohmer
Elia Bruni
Dieuwke Hupkes
LRM
102
16
0
19 May 2023
LLM-Pruner: On the Structural Pruning of Large Language Models
Xinyin Ma
Gongfan Fang
Xinchao Wang
171
445
0
19 May 2023
Towards More Robust NLP System Evaluation: Handling Missing Scores in Benchmarks
Anas Himmi
Ekhine Irurozki
Nathan Noiry
Stephan Clémençon
Pierre Colombo
193
9
0
17 May 2023
SCENE: Self-Labeled Counterfactuals for Extrapolating to Negative Examples
Deqing Fu
Ameya Godbole
Robin Jia
72
8
0
13 May 2023
Zero-shot Faithful Factual Error Correction
Kung-Hsiang Huang
Hou Pong Chan
Heng Ji
KELM, HILM
104
32
0
13 May 2023
HPE: Answering Complex Questions over Text by Hybrid Question Parsing and Execution
Ye Liu
Semih Yavuz
Rui Meng
Dragomir R. Radev
Caiming Xiong
Yingbo Zhou
89
10
0
12 May 2023
Long-Tailed Question Answering in an Open World
Yinpei Dai
Hao Lang
Yinhe Zheng
Fei Huang
Yongbin Li
VLM
76
9
0
11 May 2023
MoT: Memory-of-Thought Enables ChatGPT to Self-Improve
Xiaonan Li
Xipeng Qiu
ReLM, KELM, LRM, AI4MH
89
36
0
09 May 2023
Do Not Blindly Imitate the Teacher: Using Perturbed Loss for Knowledge Distillation
Rongzhi Zhang
Jiaming Shen
Tianqi Liu
Jia-Ling Liu
Michael Bendersky
Marc Najork
Chao Zhang
104
20
0
08 May 2023
Residual Prompt Tuning: Improving Prompt Tuning with Residual Reparameterization
Anastasia Razdaibiedina
Yuning Mao
Rui Hou
Madian Khabsa
M. Lewis
Jimmy Ba
Amjad Almahairi
VLM
79
51
0
06 May 2023
Neuromodulation Gated Transformer
Kobe Knowles
Joshua Bensemann
Diana Benavides-Prado
Vithya Yogarajan
Michael Witbrock
Gillian Dobbie
Yang Chen
58
0
0
05 May 2023
Cheaply Evaluating Inference Efficiency Metrics for Autoregressive Transformer APIs
Deepak Narayanan
Keshav Santhanam
Peter Henderson
Rishi Bommasani
Tony Lee
Percy Liang
192
3
0
03 May 2023
PTP: Boosting Stability and Performance of Prompt Tuning with Perturbation-Based Regularizer
Lichang Chen
Heng-Chiao Huang
Varun Madhavan
AAML
176
12
0
03 May 2023
Boosting Big Brother: Attacking Search Engines with Encodings
Nicholas Boucher
Luca Pajola
Ilia Shumailov
Ross J. Anderson
Mauro Conti
SILM
70
10
0
27 Apr 2023
Why Does ChatGPT Fall Short in Providing Truthful Answers?
Shen Zheng
Jie Huang
Kevin Chen-Chuan Chang
HILM, AI4MH
115
56
0
20 Apr 2023
MixPro: Simple yet Effective Data Augmentation for Prompt-based Learning
Bohan Li
Longxu Dou
Yutai Hou
Yunlong Feng
Honglin Mu
Qingfu Zhu
Qinghua Sun
Wanxiang Che
VLM
74
4
0
19 Apr 2023
Revisiting k-NN for Fine-tuning Pre-trained Language Models
Lei Li
Jing Chen
Bo Tian
Ning Zhang
56
1
0
18 Apr 2023
In ChatGPT We Trust? Measuring and Characterizing the Reliability of ChatGPT
Xinyue Shen
Zhenpeng Chen
Michael Backes
Yang Zhang
106
59
0
18 Apr 2023
Dialogue Games for Benchmarking Language Understanding: Motivation, Taxonomy, Strategy
David Schlangen
ELM
83
15
0
14 Apr 2023
Shall We Pretrain Autoregressive Language Models with Retrieval? A Comprehensive Study
Wei Ping
Ming-Yu Liu
Peng Xu
Lawrence C. McAfee
Zihan Liu
...
Oleksii Kuchaiev
Yue Liu
Chaowei Xiao
Anima Anandkumar
Bryan Catanzaro
RALM
98
59
0
13 Apr 2023
AGIEval: A Human-Centric Benchmark for Evaluating Foundation Models
Wanjun Zhong
Ruixiang Cui
Yiduo Guo
Yaobo Liang
Shuai Lu
Yanlin Wang
Amin Saied
Weizhu Chen
Nan Duan
ALM, ELM
135
550
0
13 Apr 2023
Conditional Adapters: Parameter-efficient Transfer Learning with Fast Inference
Tao Lei
Junwen Bai
Siddhartha Brahma
Joshua Ainslie
Kenton Lee
...
Vincent Zhao
Yuexin Wu
Yue Liu
Yu Zhang
Ming-Wei Chang
BDL, AI4CE
104
63
0
11 Apr 2023
LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models
Zhiqiang Hu
Lei Wang
Yihuai Lan
Wanyu Xu
Ee-Peng Lim
Lidong Bing
Xing Xu
Soujanya Poria
Roy Ka-wei Lee
ALM
158
273
0
04 Apr 2023
RPTQ: Reorder-based Post-training Quantization for Large Language Models
Zhihang Yuan
Lin Niu
Jia-Wen Liu
Wenyu Liu
Xinggang Wang
Yuzhang Shang
Guangyu Sun
Qiang Wu
Jiaxiang Wu
Bingzhe Wu
MQ
151
89
0
03 Apr 2023
BloombergGPT: A Large Language Model for Finance
Shijie Wu
Ozan Irsoy
Steven Lu
Vadim Dabravolski
Mark Dredze
Sebastian Gehrmann
P. Kambadur
David S. Rosenberg
Gideon Mann
AIFin
246
853
0
30 Mar 2023
AnnoLLM: Making Large Language Models to Be Better Crowdsourced Annotators
Xingwei He
Zheng-Wen Lin
Yeyun Gong
Alex Jin
Hang Zhang
Chen Lin
Jian Jiao
Siu-Ming Yiu
Nan Duan
Weizhu Chen
119
201
0
29 Mar 2023
JaCoText: A Pretrained Model for Java Code-Text Generation
Jessica Nayeli López Espejel
Mahaman Sanoussi Yahaya Alassan
Walid Dahhane
E. Ettifouri
56
4
0
22 Mar 2023
Self-supervised Meta-Prompt Learning with Meta-Gradient Regularization for Few-shot Generalization
Kaihang Pan
Juncheng Billy Li
Hongye Song
Jun Lin
Xiaozhong Liu
Siliang Tang
OffRL
104
13
0
22 Mar 2023
Context-faithful Prompting for Large Language Models
Wenxuan Zhou
Sheng Zhang
Hoifung Poon
Muhao Chen
KELM
61
64
0
20 Mar 2023
Trained on 100 million words and still in shape: BERT meets British National Corpus
David Samuel
Andrey Kutuzov
Lilja Øvrelid
Erik Velldal
101
32
0
17 Mar 2023
UPRISE: Universal Prompt Retrieval for Improving Zero-Shot Evaluation
Daixuan Cheng
Shaohan Huang
Junyu Bi
Yu-Wei Zhan
Jianfeng Liu
Yujing Wang
Hao Sun
Furu Wei
Denvy Deng
Qi Zhang
RALM, LRM
86
69
0
15 Mar 2023
Model-tuning Via Prompts Makes NLP Models Adversarially Robust
Mrigank Raman
Pratyush Maini
J. Zico Kolter
Zachary Chase Lipton
Danish Pruthi
AAML
71
17
0
13 Mar 2023
Dynamic Prompting: A Unified Framework for Prompt Tuning
Xianjun Yang
Wei Cheng
Xujiang Zhao
Wenchao Yu
Linda R. Petzold
Haifeng Chen
VLM
115
16
0
06 Mar 2023