ResearchTrend.AI
BoolQ: Exploring the Surprising Difficulty of Natural Yes/No Questions

24 May 2019
Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, Kristina Toutanova

Papers citing "BoolQ: Exploring the Surprising Difficulty of Natural Yes/No Questions"

50 / 1,143 papers shown
Unveiling and Harnessing Hidden Attention Sinks: Enhancing Large Language Models without Training through Attention Calibration
Zhongzhi Yu, Zheng Wang, Yonggan Fu, Huihong Shi, Khalid Shaikh, Yingyan Celine Lin
22 Jun 2024
RankAdaptor: Hierarchical Dynamic Low-Rank Adaptation for Structural Pruned LLMs
Changhai Zhou, Shijie Han, Shiyang Zhang, Shichao Weng, Zekai Liu, Cheng Jin
22 Jun 2024
Sports Intelligence: Assessing the Sports Understanding Capabilities of Language Models through Question Answering from Text to Video
Zhengbang Yang, Haotian Xia, Jingxi Li, Zezhi Chen, Zhuangdi Zhu, Weining Shen
21 Jun 2024
Rethinking Pruning Large Language Models: Benefits and Pitfalls of Reconstruction Error Minimization
Sungbin Shin, Wonpyo Park, Jaeho Lee, Namhoon Lee
21 Jun 2024
Efficient Continual Pre-training by Mitigating the Stability Gap
Yiduo Guo, Jie Fu, Huishuai Zhang, Dongyan Zhao, Songlin Yang
21 Jun 2024
CEBench: A Benchmarking Toolkit for the Cost-Effectiveness of LLM Pipelines
Wenbo Sun, Jiaqi Wang, Qiming Guo, Ziyu Li, Wenlu Wang, Rihan Hai
20 Jun 2024
Instruction Pre-Training: Language Models are Supervised Multitask Learners
Daixuan Cheng, Yuxian Gu, Shaohan Huang, Junyu Bi, Minlie Huang, Furu Wei
20 Jun 2024
Large Language Models are Skeptics: False Negative Problem of Input-conflicting Hallucination
Jongyoon Song, Sangwon Yu, Sungroh Yoon
20 Jun 2024
Improving Visual Commonsense in Language Models via Multiple Image Generation
Guy Yariv, Idan Schwartz, Yossi Adi, Sagie Benaim
19 Jun 2024
BiLD: Bi-directional Logits Difference Loss for Large Language Model Distillation
Minchong Li, Feng Zhou, Xiaohui Song
19 Jun 2024
Towards Robust Evaluation: A Comprehensive Taxonomy of Datasets and Metrics for Open Domain Question Answering in the Era of Large Language Models
Akchay Srivastava, Atif Memon
19 Jun 2024
When Parts are Greater Than Sums: Individual LLM Components Can Outperform Full Models
Ting-Yun Chang, Jesse Thomason, Robin Jia
19 Jun 2024
BoA: Attention-aware Post-training Quantization without Backpropagation
Junhan Kim, Ho-Young Kim, Eulrang Cho, Chungman Lee, Joonyoung Kim, Yongkweon Jeon
19 Jun 2024
LaMDA: Large Model Fine-Tuning via Spectrally Decomposed Low-Dimensional Adaptation
Seyedarmin Azizi, Souvik Kundu, Massoud Pedram
18 Jun 2024
Hierarchical Prompting Taxonomy: A Universal Evaluation Framework for Large Language Models
Devichand Budagam, Sankalp KJ, Ashutosh Kumar, Vinija Jain, Aman Chadha
18 Jun 2024
PDSS: A Privacy-Preserving Framework for Step-by-Step Distillation of Large Language Models
Tao Fan, Yan Kang, Weijing Chen, Hanlin Gu, Yuanfeng Song, Lixin Fan, Kai Chen, Qiang Yang
18 Jun 2024
Mixture of Scales: Memory-Efficient Token-Adaptive Binarization for Large Language Models
Dongwon Jo, Taesu Kim, Yulhwa Kim, Jae-Joon Kim
18 Jun 2024
InternalInspector $I^2$: Robust Confidence Estimation in LLMs through Internal States
Mohammad Beigi, Ying Shen, Runing Yang, Zihao Lin, Qifan Wang, Ankith Mohan, Jianfeng He, Ming Jin, Chang-Tien Lu, Lifu Huang
17 Jun 2024
LiLiuM: eBay's Large Language Models for e-commerce
Christian Herold, Michael Kozielski, Leonid Ekimov, Pavel Petrushkov, P. Vandenbussche, Shahram Khadivi
17 Jun 2024
Counterfactual Debating with Preset Stances for Hallucination Elimination of LLMs
Yi Fang, Moxin Li, Wenjie Wang, Hui Lin, Fuli Feng
17 Jun 2024
CodeGemma: Open Code Models Based on Gemma
CodeGemma Team, Heri Zhao, Jeffrey Hui, Joshua Howland, Nam Nguyen, ..., Ale Jakse Hartman, Bin Ni, Kathy Korevec, Kelly Schaefer, Scott Huffman
17 Jun 2024
Dynamic Data Mixing Maximizes Instruction Tuning for Mixture-of-Experts
Tong Zhu, Daize Dong, Xiaoye Qu, Jiacheng Ruan, Wenliang Chen, Yu Cheng
17 Jun 2024
RoseLoRA: Row and Column-wise Sparse Low-rank Adaptation of Pre-trained Language Model for Knowledge Editing and Fine-tuning
Haoyu Wang, Tianci Liu, Ruirui Li, Monica Cheng, Tuo Zhao, Jing Gao
16 Jun 2024
Mixture-of-Subspaces in Low-Rank Adaptation
Taiqiang Wu, Jiahao Wang, Zhe Zhao, Ngai Wong
16 Jun 2024
CoLoR-Filter: Conditional Loss Reduction Filtering for Targeted Language Model Pre-training
David Brandfonbrener, Hanlin Zhang, Andreas Kirsch, Jonathan Richard Schwarz, Sham Kakade
15 Jun 2024
GenQA: Generating Millions of Instructions from a Handful of Prompts
Jiuhai Chen, Rifaa Qadri, Yuxin Wen, Neel Jain, John Kirchenbauer, Dinesh Manocha, Tom Goldstein
14 Jun 2024
ECBD: Evidence-Centered Benchmark Design for NLP
Yu Lu Liu, Su Lin Blodgett, Jackie Chi Kit Cheung, Q. Vera Liao, Alexandra Olteanu, Ziang Xiao
13 Jun 2024
MiLoRA: Harnessing Minor Singular Components for Parameter-Efficient LLM Finetuning
Hanqing Wang, Zeguan Xiao, Shuo Wang, Guanhua Chen
13 Jun 2024
Reversing the Forget-Retain Objectives: An Efficient LLM Unlearning Framework from Logit Difference
Jiabao Ji, Yujian Liu, Yang Zhang, Gaowen Liu, Ramana Rao Kompella, Sijia Liu, Shiyu Chang
12 Jun 2024
Large Language Models Must Be Taught to Know What They Don't Know
Sanyam Kapoor, Nate Gruver, Manley Roberts, Katherine Collins, Arka Pal, Umang Bhatt, Adrian Weller, Samuel Dooley, Micah Goldblum, Andrew Gordon Wilson
12 Jun 2024
OLMES: A Standard for Language Model Evaluations
Yuling Gu, Oyvind Tafjord, Bailey Kuehl, Dany Haddad, Jesse Dodge, Hannaneh Hajishirzi
12 Jun 2024
Paraphrasing in Affirmative Terms Improves Negation Understanding
MohammadHossein Rezaei, Eduardo Blanco
11 Jun 2024
TernaryLLM: Ternarized Large Language Model
Tianqi Chen, Zhe Li, Weixiang Xu, Zeyu Zhu, Dong Li, Lu Tian, E. Barsoum, Peisong Wang, Jian Cheng
11 Jun 2024
Effectively Compress KV Heads for LLM
Hao Yu, Zelan Yang, Shen Li, Yong Li, Jianxin Wu
11 Jun 2024
MoreauPruner: Robust Pruning of Large Language Models against Weight Perturbations
Zixiao Wang, Jingwei Zhang, Wenqian Zhao, Farzan Farnia, Bei Yu
11 Jun 2024
Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling
Liliang Ren, Yang Liu, Yadong Lu, Yelong Shen, Chen Liang, Weizhu Chen
11 Jun 2024
Low-Rank Quantization-Aware Training for LLMs
Yelysei Bondarenko, Riccardo Del Chiaro, Markus Nagel
10 Jun 2024
MATES: Model-Aware Data Selection for Efficient Pretraining with Data Influence Models
Zichun Yu, Spandan Das, Chenyan Xiong
10 Jun 2024
ShiftAddLLM: Accelerating Pretrained LLMs via Post-Training Multiplication-Less Reparameterization
Haoran You, Yipin Guo, Yichao Fu, Wei Zhou, Huihong Shi, Xiaofan Zhang, Souvik Kundu, Amir Yazdanbakhsh, Y. Lin
10 Jun 2024
SuperPos-Prompt: Enhancing Soft Prompt Tuning of Language Models with Superposition of Multi Token Embeddings
MohammadAli SadraeiJavaeri, Ehsaneddin Asgari, A. Mchardy, Hamid R. Rabiee
07 Jun 2024
Revisiting Catastrophic Forgetting in Large Language Model Tuning
Hongyu Li, Liang Ding, Meng Fang, Dacheng Tao
07 Jun 2024
BERTs are Generative In-Context Learners
David Samuel
07 Jun 2024
PromptFix: Few-shot Backdoor Removal via Adversarial Prompt Tuning
Tianrong Zhang, Zhaohan Xi, Ting Wang, Prasenjit Mitra, Jinghui Chen
06 Jun 2024
Light-PEFT: Lightening Parameter-Efficient Fine-Tuning via Early Pruning
Naibin Gu, Peng Fu, Xiyu Liu, Bowen Shen, Zheng Lin, Weiping Wang
06 Jun 2024
Does your data spark joy? Performance gains from domain upsampling at the end of training
Cody Blakeney, Mansheej Paul, Brett W. Larsen, Sean Owen, Jonathan Frankle
05 Jun 2024
Pruner-Zero: Evolving Symbolic Pruning Metric from scratch for Large Language Models
Peijie Dong, Lujun Li, Zhenheng Tang, Xiang Liu, Xinglin Pan, Qiang-qiang Wang, Xiaowen Chu
05 Jun 2024
Zeroth-Order Fine-Tuning of LLMs with Extreme Sparsity
Wentao Guo, Jikai Long, Yimeng Zeng, Zirui Liu, Xinyu Yang, ..., Osbert Bastani, Christopher De Sa, Xiaodong Yu, Beidi Chen, Zhaozhuo Xu
05 Jun 2024
Xmodel-LM Technical Report
Yichuan Wang, Yang Liu, Yu Yan, Qun Wang, Xucheng Huang, Ling Jiang
05 Jun 2024
FedMKT: Federated Mutual Knowledge Transfer for Large and Small Language Models
Tao Fan, Guoqiang Ma, Yan Kang, Hanlin Gu, Yuanfeng Song, Lixin Fan, Kai Chen, Qiang Yang
04 Jun 2024
OLoRA: Orthonormal Low-Rank Adaptation of Large Language Models
Kerim Büyükakyüz
03 Jun 2024