BoolQ: Exploring the Surprising Difficulty of Natural Yes/No Questions

24 May 2019
Christopher Clark
Kenton Lee
Ming-Wei Chang
Tom Kwiatkowski
Michael Collins
Kristina Toutanova

Papers citing "BoolQ: Exploring the Surprising Difficulty of Natural Yes/No Questions"

Showing 50 of 1,143 citing papers.
Make Your Decision Convincing! A Unified Two-Stage Framework: Self-Attribution and Decision-Making
Yanrui Du
Sendong Zhao
Hao Wang
Yuhan Chen
Rui Bai
Zewen Qiang
Muzhen Cai
Bing Qin
54
0
0
20 Oct 2023
Interpreting Indirect Answers to Yes-No Questions in Multiple Languages
Zijie Wang
Md Mosharaf Hossain
Shivam Mathur
Terry Cruz Melo
Kadir Bulut Ozler
...
Jacob Quintero
MohammadHossein Rezaei
Shreya Nupur Shakya
Md Nayem Uddin
Eduardo Blanco
65
1
0
20 Oct 2023
Attack Prompt Generation for Red Teaming and Defending Large Language Models
Boyi Deng
Wenjie Wang
Fuli Feng
Yang Deng
Qifan Wang
Xiangnan He
AAML
76
57
0
19 Oct 2023
Eliminating Reasoning via Inferring with Planning: A New Framework to Guide LLMs' Non-linear Thinking
Yongqi Tong
Yifan Wang
Dawei Li
Sizhe Wang
Zi Lin
Simeng Han
Jingbo Shang
LRM
52
17
0
18 Oct 2023
Measuring Pointwise $\mathcal{V}$-Usable Information In-Context-ly
Sheng Lu
Shan Chen
Yingya Li
Danielle Bitterman
G. Savova
Iryna Gurevych
52
0
0
18 Oct 2023
Prototype-based HyperAdapter for Sample-Efficient Multi-task Tuning
Hao Zhao
Jie Fu
Zhaofeng He
155
6
0
18 Oct 2023
IDEAL: Influence-Driven Selective Annotations Empower In-Context Learners in Large Language Models
Shaokun Zhang
Xiaobo Xia
Zhaoqing Wang
Ling-Hao Chen
Jiale Liu
Qingyun Wu
Tongliang Liu
83
22
0
16 Oct 2023
In-context Pretraining: Language Modeling Beyond Document Boundaries
Weijia Shi
Sewon Min
Maria Lomeli
Chunting Zhou
Margaret Li
...
Victoria Lin
Noah A. Smith
Luke Zettlemoyer
Scott Yih
Mike Lewis
LRM, RALM, SyDa
135
56
0
16 Oct 2023
Diversifying the Mixture-of-Experts Representation for Language Models with Orthogonal Optimizer
Boan Liu
Liang Ding
Li Shen
Keqin Peng
Yu Cao
Dazhao Cheng
Dacheng Tao
MoE
80
9
0
15 Oct 2023
KGQuiz: Evaluating the Generalization of Encoded Knowledge in Large Language Models
Yuyang Bai
Shangbin Feng
Vidhisha Balachandran
Zhaoxuan Tan
Shiqi Lou
Tianxing He
Yulia Tsvetkov
ELM
95
3
0
15 Oct 2023
DPZero: Private Fine-Tuning of Language Models without Backpropagation
Liang Zhang
Bingcong Li
K. K. Thekumparampil
Sewoong Oh
Niao He
92
15
0
14 Oct 2023
Towards Informative Few-Shot Prompt with Maximum Information Gain for In-Context Learning
Hongfu Liu
Ye Wang
80
9
0
13 Oct 2023
GLoRE: Evaluating Logical Reasoning of Large Language Models
Hanmeng Liu
Zhiyang Teng
Ruoxi Ning
Jian Liu
Qiji Zhou
Yuexin Zhang
Yue Zhang
ReLM, ELM, LRM
164
8
0
13 Oct 2023
Tokenizer Choice For LLM Training: Negligible or Crucial?
Mehdi Ali
Michael Fromm
Klaudia Thellmann
Richard Rutmann
Max Lübbering
...
Malte Ostendorff
Samuel Weinbach
R. Sifa
Stefan Kesselheim
Nicolas Flores-Herr
114
61
0
12 Oct 2023
Faithfulness Measurable Masked Language Models
Andreas Madsen
Siva Reddy
Sarath Chandar
81
3
0
11 Oct 2023
Mistral 7B
Albert Q. Jiang
Alexandre Sablayrolles
A. Mensch
Chris Bamford
Devendra Singh Chaplot
...
Teven Le Scao
Thibaut Lavril
Thomas Wang
Timothée Lacroix
William El Sayed
MoE, LRM
154
2,260
0
10 Oct 2023
TRACE: A Comprehensive Benchmark for Continual Learning in Large Language Models
Xiao Wang
Yuan Zhang
Tianze Chen
Songyang Gao
Senjie Jin
...
Rui Zheng
Yicheng Zou
Tao Gui
Qi Zhang
Xuanjing Huang
ALM, LRM, CLL
91
23
0
10 Oct 2023
InterroLang: Exploring NLP Models and Datasets through Dialogue-based Explanations
Nils Feldhus
Qianli Wang
Tatiana Anikina
Sahil Chopra
Cennet Oguz
Sebastian Möller
95
14
0
09 Oct 2023
Compresso: Structured Pruning with Collaborative Prompting Learns Compact Large Language Models
Song Guo
Jiahang Xu
Li Zhang
Mao Yang
87
15
0
08 Oct 2023
Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity
Lu Yin
You Wu
Zhenyu Zhang
Cheng-Yu Hsieh
Yaqing Wang
...
Mykola Pechenizkiy
Yi Liang
Michael Bendersky
Zhangyang Wang
Shiwei Liu
137
102
0
08 Oct 2023
Dual Grained Quantization: Efficient Fine-Grained Quantization for LLM
Luoming Zhang
Wen Fei
Weijia Wu
Yefei He
Zhenyu Lou
Hong Zhou
MQ
64
5
0
07 Oct 2023
Shadow Alignment: The Ease of Subverting Safely-Aligned Language Models
Xianjun Yang
Xiao Wang
Qi Zhang
Linda R. Petzold
William Y. Wang
Xun Zhao
Dahua Lin
83
190
0
04 Oct 2023
CITING: Large Language Models Create Curriculum for Instruction Tuning
Tao Feng
Zifeng Wang
Jimeng Sun
ALM
89
15
0
04 Oct 2023
Who's Harry Potter? Approximate Unlearning in LLMs
Ronen Eldan
M. Russinovich
MU, MoMe
169
217
0
03 Oct 2023
Fool Your (Vision and) Language Model With Embarrassingly Simple Permutations
Yongshuo Zong
Tingyang Yu
Ruchika Chavhan
Bingchen Zhao
Timothy M. Hospedales
MLLM, AAML, LRM
76
20
0
02 Oct 2023
RA-DIT: Retrieval-Augmented Dual Instruction Tuning
Xi Lin
Xilun Chen
Mingda Chen
Weijia Shi
Maria Lomeli
...
Jacob Kahn
Gergely Szilvasy
Mike Lewis
Luke Zettlemoyer
Scott Yih
RALM
144
157
0
02 Oct 2023
Synthetic Data Generation in Low-Resource Settings via Fine-Tuning of Large Language Models
Jean Kaddour
Qi Liu
SyDa
54
2
0
02 Oct 2023
Corex: Pushing the Boundaries of Complex Reasoning through Multi-Model Collaboration
Qiushi Sun
Zhangyue Yin
Xiang Li
Zhiyong Wu
Xipeng Qiu
Lingpeng Kong
LRM, LLMAG
85
49
0
30 Sep 2023
Network Memory Footprint Compression Through Jointly Learnable Codebooks and Mappings
Vittorio Giammarino
Arnaud Dapogny
Kévin Bailly
MQ
65
1
0
29 Sep 2023
PB-LLM: Partially Binarized Large Language Models
Yuzhang Shang
Zhihang Yuan
Qiang Wu
Zhen Dong
MQ
95
48
0
29 Sep 2023
Batch Calibration: Rethinking Calibration for In-Context Learning and Prompt Engineering
Han Zhou
Xingchen Wan
Lev Proleev
Diana Mincu
Jilin Chen
Katherine A. Heller
Subhrajit Roy
UQLM
85
61
0
29 Sep 2023
How many words does ChatGPT know? The answer is ChatWords
Gonzalo Martínez
Javier Conde
Pedro Reviriego
Elena Merino-Gómez
José Alberto Hernández
Fabrizio Lombardi
AI4MH
19
5
0
28 Sep 2023
Qwen Technical Report
Jinze Bai
Shuai Bai
Yunfei Chu
Zeyu Cui
Kai Dang
...
Zhenru Zhang
Chang Zhou
Jingren Zhou
Xiaohuan Zhou
Tianhang Zhu
OSLM
324
1,922
0
28 Sep 2023
QA-LoRA: Quantization-Aware Low-Rank Adaptation of Large Language Models
Yuhui Xu
Lingxi Xie
Xiaotao Gu
Xin Chen
Heng Chang
Hengheng Zhang
Zhensu Chen
Xiaopeng Zhang
Qi Tian
MQ
75
108
0
26 Sep 2023
Does the "most sinfully decadent cake ever" taste good? Answering Yes/No Questions from Figurative Contexts
Geetanjali Rakshit
Jeffrey Flanigan
ELM
62
1
0
24 Sep 2023
Foundation Metrics for Evaluating Effectiveness of Healthcare Conversations Powered by Generative AI
Mahyar Abbasian
Elahe Khatibi
Iman Azimi
David Oniani
Zahra Shakeri Hossein Abad
...
Bryant Lin
Olivier Gevaert
Li-Jia Li
Ramesh C. Jain
Amir M. Rahmani
LM&MA, ELM, AI4MH
139
76
0
21 Sep 2023
Knowledge Sanitization of Large Language Models
Yoichi Ishibashi
Hidetoshi Shimodaira
KELM
129
25
0
21 Sep 2023
BTLM-3B-8K: 7B Parameter Performance in a 3B Parameter Model
Nolan Dey
Daria Soboleva
Faisal Al-Khateeb
Bowen Yang
Ribhu Pathria
...
Robert Myers
Jacob Robert Steeves
Natalia Vassilieva
Marvin Tom
Joel Hestness
MoE
87
16
0
20 Sep 2023
DreamLLM: Synergistic Multimodal Comprehension and Creation
Runpei Dong
Chunrui Han
Yuang Peng
Zekun Qi
Zheng Ge
...
Hao-Ran Wei
Xiangwen Kong
Xiangyu Zhang
Kaisheng Ma
Li Yi
MLLM
106
199
0
20 Sep 2023
Estimating Contamination via Perplexity: Quantifying Memorisation in Language Model Evaluation
Yucheng Li
91
35
0
19 Sep 2023
PoSE: Efficient Context Window Extension of LLMs via Positional Skip-wise Training
Dawei Zhu
Nan Yang
Liang Wang
Yifan Song
Wenhao Wu
Furu Wei
Sujian Li
159
89
0
19 Sep 2023
Adapting Large Language Models via Reading Comprehension
Daixuan Cheng
Shaohan Huang
Furu Wei
CLL, SyDa, AI4CE
86
64
0
18 Sep 2023
Contrastive Decoding Improves Reasoning in Large Language Models
Sean O'Brien
Mike Lewis
SyDa, LRM, ReLM
102
39
0
17 Sep 2023
Headless Language Models: Learning without Predicting with Contrastive Weight Tying
Nathan Godey
Eric Villemonte de la Clergerie
Benoît Sagot
55
3
0
15 Sep 2023
Safety-Tuned LLaMAs: Lessons From Improving the Safety of Large Language Models that Follow Instructions
Federico Bianchi
Mirac Suzgun
Giuseppe Attanasio
Paul Röttger
Dan Jurafsky
Tatsunori Hashimoto
James Zou
ALM, LM&MA, LRM
96
219
0
14 Sep 2023
Pretraining on the Test Set Is All You Need
Rylan Schaeffer
118
30
0
13 Sep 2023
Optimize Weight Rounding via Signed Gradient Descent for the Quantization of LLMs
Wenhua Cheng
Weiwei Zhang
Haihao Shen
Yiyang Cai
Xin He
Kaokao Lv
Yi. Liu
MQ
158
25
0
11 Sep 2023
NeCo@ALQAC 2023: Legal Domain Knowledge Acquisition for Low-Resource Languages through Data Enrichment
Hai-Long Nguyen
Dieu-Quynh Nguyen
Hoang-Trung Nguyen
Thu-Trang Pham
Huu-Dong Nguyen
Thach-Anh Nguyen
Thi-Hai-Yen Vuong
Nguyen Ha Thanh
AILaw
45
3
0
11 Sep 2023
Textbooks Are All You Need II: phi-1.5 technical report
Yuan-Fang Li
Sébastien Bubeck
Ronen Eldan
Allison Del Giorno
Suriya Gunasekar
Yin Tat Lee
ALM, LRM
171
482
0
11 Sep 2023
DePT: Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning
Zhengxiang Shi
Aldo Lipani
VLM
124
34
0
11 Sep 2023