ResearchTrend.AI

BoolQ: Exploring the Surprising Difficulty of Natural Yes/No Questions
arXiv:1905.10044

24 May 2019
Christopher Clark
Kenton Lee
Ming-Wei Chang
Tom Kwiatkowski
Michael Collins
Kristina Toutanova

Papers citing "BoolQ: Exploring the Surprising Difficulty of Natural Yes/No Questions"

50 / 1,143 papers shown
Rethinking Generative Large Language Model Evaluation for Semantic Comprehension
Fangyun Wei
Xi Chen
Linzi Luo
ELMALMLRM
63
8
0
12 Mar 2024
Towards Faithful Explanations: Boosting Rationalization with Shortcuts Discovery
Linan Yue
Qi Liu
Yichao Du
Li Wang
Weibo Gao
Yanqing An
82
5
0
12 Mar 2024
FrameQuant: Flexible Low-Bit Quantization for Transformers
Harshavardhan Adepu
Zhanpeng Zeng
Li Zhang
Vikas Singh
MQ
55
8
0
10 Mar 2024
Concept-aware Data Construction Improves In-context Learning of Language Models
Michal Štefánik
Marek Kadlcík
Petr Sojka
92
1
0
08 Mar 2024
Yi: Open Foundation Models by 01.AI
01.AI
Alex Young
Bei Chen
Chao Li
...
Yue Wang
Yuxuan Cai
Zhenyu Gu
Zhiyuan Liu
Zonghong Dai
OSLMLRM
313
576
0
07 Mar 2024
Many-Objective Multi-Solution Transport
Ziyue Li
Tian Li
Virginia Smith
Jeff Bilmes
Dinesh Manocha
85
3
0
06 Mar 2024
ShortGPT: Layers in Large Language Models are More Redundant Than You Expect
Xin Men
Mingyu Xu
Qingyu Zhang
Bingning Wang
Hongyu Lin
Yaojie Lu
Xianpei Han
Weipeng Chen
117
141
0
06 Mar 2024
SPUQ: Perturbation-Based Uncertainty Quantification for Large Language Models
Xiang Gao
Jiaxin Zhang
Lalla Mouatadid
Kamalika Das
83
14
0
04 Mar 2024
IntactKV: Improving Large Language Model Quantization by Keeping Pivot Tokens Intact
Ruikang Liu
Haoli Bai
Haokun Lin
Yuening Li
Han Gao
Zheng-Jun Xu
Lu Hou
Jun Yao
Chun Yuan
MQ
84
32
0
02 Mar 2024
STAR: Constraint LoRA with Dynamic Active Learning for Data-Efficient Fine-Tuning of Large Language Models
Linhai Zhang
Jialong Wu
Deyu Zhou
Guoqiang Xu
98
5
0
02 Mar 2024
ATP: Enabling Fast LLM Serving via Attention on Top Principal Keys
Yue Niu
Saurav Prakash
Salman Avestimehr
51
1
0
01 Mar 2024
Functional Benchmarks for Robust Evaluation of Reasoning Performance, and the Reasoning Gap
Saurabh Srivastava
Annarose M B
Anto P V
Shashank Menon
Ajay Sukumar
Adwaith Samod T
Alan Philipose
Stevin Prince
Sooraj Thomas
ELMReLMLRM
79
56
0
29 Feb 2024
When does word order matter and when doesn't it?
Xuanda Chen
T. O'Donnell
Siva Reddy
92
1
0
29 Feb 2024
Learning to Compress Prompt in Natural Language Formats
Yu-Neng Chuang
Tianwei Xing
Chia-Yuan Chang
Zirui Liu
Xun Chen
Helen Zhou
86
20
0
28 Feb 2024
FlattenQuant: Breaking Through the Inference Compute-bound for Large Language Models with Per-tensor Quantization
Yi Zhang
Fei Yang
Shuang Peng
Fangyu Wang
Aimin Pan
MQ
71
2
0
28 Feb 2024
The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits
Shuming Ma
Hongyu Wang
Lingxiao Ma
Lei Wang
Wenhui Wang
Shaohan Huang
Lifeng Dong
Ruiping Wang
Jilong Xue
Furu Wei
MQ
95
234
0
27 Feb 2024
Massive Activations in Large Language Models
Mingjie Sun
Xinlei Chen
J. Zico Kolter
Zhuang Liu
126
81
0
27 Feb 2024
DenseMamba: State Space Models with Dense Hidden Connection for Efficient Large Language Models
Wei He
Kai Han
Yehui Tang
Chengcheng Wang
Yujie Yang
Tianyu Guo
Yunhe Wang
Mamba
121
26
0
26 Feb 2024
SportQA: A Benchmark for Sports Understanding in Large Language Models
Haotian Xia
Zhengbang Yang
Yuqing Wang
Rhys Tracy
Yun Zhao
Dongdong Huang
Zezhi Chen
Yan Zhu
Yuan-fang Wang
Weining Shen
78
10
0
24 Feb 2024
Sparse MeZO: Less Parameters for Better Performance in Zeroth-Order LLM Fine-Tuning
Yong Liu
Zirui Zhu
Chaoyu Gong
Minhao Cheng
Cho-Jui Hsieh
Yang You
MoE
81
23
0
24 Feb 2024
PEMT: Multi-Task Correlation Guided Mixture-of-Experts Enables Parameter-Efficient Transfer Learning
Zhisheng Lin
Han Fu
Chenghao Liu
Zhuo Li
Jianling Sun
MoEMoMe
47
6
0
23 Feb 2024
GPTVQ: The Blessing of Dimensionality for LLM Quantization
M. V. Baalen
Andrey Kuzmin
Ivan Koryakovskiy
Markus Nagel
Peter Couperus
Cédric Bastoul
E. Mahurin
Tijmen Blankevoort
Paul N. Whatmough
MQ
108
35
0
23 Feb 2024
MobileLLM: Optimizing Sub-billion Parameter Language Models for On-Device Use Cases
Zechun Liu
Changsheng Zhao
Forrest N. Iandola
Chen Lai
Yuandong Tian
...
Ernie Chang
Yangyang Shi
Raghuraman Krishnamoorthi
Liangzhen Lai
Vikas Chandra
ALM
137
102
0
22 Feb 2024
Efficient and Effective Vocabulary Expansion Towards Multilingual Large Language Models
Seungduk Kim
Seungtaek Choi
Myeongho Jeong
78
7
0
22 Feb 2024
Towards Unified Task Embeddings Across Multiple Models: Bridging the Gap for Prompt-Based Large Language Models and Beyond
Xinyu Wang
Hainiu Xu
Lin Gui
Yulan He
MoMeAIFin
112
1
0
22 Feb 2024
Do LLMs Implicitly Determine the Suitable Text Difficulty for Users?
Seiji Gobara
Hidetaka Kamigaito
Taro Watanabe
79
4
0
22 Feb 2024
On the Tip of the Tongue: Analyzing Conceptual Representation in Large Language Models with Reverse-Dictionary Probe
Ningyu Xu
Qi Zhang
Menghan Zhang
Peng Qian
Xuanjing Huang
LRM
124
3
0
22 Feb 2024
Take the Bull by the Horns: Hard Sample-Reweighted Continual Training Improves LLM Generalization
Xuxi Chen
Zhendong Wang
Daouda Sow
Junjie Yang
Tianlong Chen
Yingbin Liang
Mingyuan Zhou
Zhangyang Wang
88
7
0
22 Feb 2024
FinGPT-HPC: Efficient Pretraining and Finetuning Large Language Models for Financial Applications with High-Performance Computing
Xiao-Yang Liu
Jie Zhang
Guoxuan Wang
Weiqin Tong
Anwar Elwalid
68
4
0
21 Feb 2024
ProSparse: Introducing and Enhancing Intrinsic Activation Sparsity within Large Language Models
Chenyang Song
Xu Han
Zhengyan Zhang
Shengding Hu
Xiyu Shi
...
Chen Chen
Zhiyuan Liu
Guanglin Li
Tao Yang
Maosong Sun
157
32
0
21 Feb 2024
MoELoRA: Contrastive Learning Guided Mixture of Experts on Parameter-Efficient Fine-Tuning for Large Language Models
Tongxu Luo
Jiahe Lei
Fangyu Lei
Weihao Liu
Shizhu He
Jun Zhao
Kang Liu
MoEALM
84
29
0
20 Feb 2024
HyperMoE: Towards Better Mixture of Experts via Transferring Among Experts
Hao Zhao
Zihan Qiu
Huijia Wu
Zili Wang
Zhaofeng He
Jie Fu
MoE
124
13
0
20 Feb 2024
Comparing Specialised Small and General Large Language Models on Text Classification: 100 Labelled Samples to Achieve Break-Even Performance
Branislav Pecher
Ivan Srba
Maria Bielikova
ALM
100
8
0
20 Feb 2024
SIBO: A Simple Booster for Parameter-Efficient Fine-Tuning
Zhihao Wen
Jie Zhang
Yuan Fang
MoE
75
3
0
19 Feb 2024
Head-wise Shareable Attention for Large Language Models
Zouying Cao
Yifei Yang
Hai Zhao
61
4
0
19 Feb 2024
Generation Meets Verification: Accelerating Large Language Model Inference with Smart Parallel Auto-Correct Decoding
Hanling Yi
Feng-Huei Lin
Hongbin Li
Peiyang Ning
Xiaotian Yu
Rong Xiao
LRM
80
13
0
19 Feb 2024
BESA: Pruning Large Language Models with Blockwise Parameter-Efficient Sparsity Allocation
Peng Xu
Wenqi Shao
Mengzhao Chen
Shitao Tang
Kai-Chuang Zhang
Peng Gao
Fengwei An
Yu Qiao
Ping Luo
MoE
116
32
0
18 Feb 2024
Benchmark Self-Evolving: A Multi-Agent Framework for Dynamic LLM Evaluation
Siyuan Wang
Zhuohan Long
Zhihao Fan
Zhongyu Wei
Xuanjing Huang
LLMAG
99
38
0
18 Feb 2024
OneBit: Towards Extremely Low-bit Large Language Models
Yuzhuang Xu
Xu Han
Zonghan Yang
Shuo Wang
Qingfu Zhu
Zhiyuan Liu
Weidong Liu
Wanxiang Che
MQ
117
46
0
17 Feb 2024
LaCo: Large Language Model Pruning via Layer Collapse
Yifei Yang
Zouying Cao
Hai Zhao
90
63
0
17 Feb 2024
A Question Answering Based Pipeline for Comprehensive Chinese EHR Information Extraction
Huaiyuan Ying
Sheng Yu
MedIm
49
0
0
17 Feb 2024
Contrastive Instruction Tuning
Tianyi Yan
Fei Wang
James Y. Huang
Wenxuan Zhou
Fan Yin
Aram Galstyan
Wenpeng Yin
Muhao Chen
ALM
58
6
0
17 Feb 2024
Language Models as Science Tutors
Alexis Chevalier
Jiayi Geng
Alexander Wettig
Howard Chen
Sebastian Mizera
...
Jiatong Yu
Jun-Jie Zhu
Z. Ren
Sanjeev Arora
Danqi Chen
ELM
72
13
0
16 Feb 2024
Both Matter: Enhancing the Emotional Intelligence of Large Language Models without Compromising the General Intelligence
Weixiang Zhao
Zhuojun Li
Shilong Wang
Yang Wang
Yulin Hu
Yanyan Zhao
Chen Wei
Bing Qin
108
5
0
15 Feb 2024
NutePrune: Efficient Progressive Pruning with Numerous Teachers for Large Language Models
Shengrui Li
Junzhe Chen
Xueting Han
Jing Bai
81
6
0
15 Feb 2024
How to Train Data-Efficient LLMs
Noveen Sachdeva
Benjamin Coleman
Wang-Cheng Kang
Jianmo Ni
Lichan Hong
Ed H. Chi
James Caverlee
Julian McAuley
D. Cheng
93
64
0
15 Feb 2024
Get More with LESS: Synthesizing Recurrence with KV Cache Compression for Efficient LLM Inference
Harry Dong
Xinyu Yang
Zhenyu Zhang
Zhangyang Wang
Yuejie Chi
Beidi Chen
78
54
0
14 Feb 2024
HiRE: High Recall Approximate Top-$k$ Estimation for Efficient LLM Inference
Yashas Samaga
Varun Yerram
Chong You
Srinadh Bhojanapalli
Sanjiv Kumar
Prateek Jain
Praneeth Netrapalli
79
5
0
14 Feb 2024
Bayesian Multi-Task Transfer Learning for Soft Prompt Tuning
Haeju Lee
Minchan Jeong
SeYoung Yun
Kee-Eung Kim
AAMLVPVLM
88
3
0
13 Feb 2024
Plausible Extractive Rationalization through Semi-Supervised Entailment Signal
Yeo Wei Jie
Ranjan Satapathy
Min Zhang
62
6
0
13 Feb 2024