ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

BoolQ: Exploring the Surprising Difficulty of Natural Yes/No Questions
arXiv:1905.10044, 24 May 2019
Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, Kristina Toutanova

Papers citing "BoolQ: Exploring the Surprising Difficulty of Natural Yes/No Questions"

Showing 50 of 1,040 citing papers.
Router-Tuning: A Simple and Effective Approach for Enabling Dynamic-Depth in Transformers
  Shwai He, Tao Ge, Guoheng Sun, Bowei Tian, Xiaoyang Wang, Ang Li (17 Oct 2024) [MoE]
Identifying Task Groupings for Multi-Task Learning Using Pointwise V-Usable Information
  Yingya Li, Timothy A. Miller, Steven Bethard, G. Savova (16 Oct 2024)
Evaluating the Instruction-following Abilities of Language Models using Knowledge Tasks
  Rudra Murthy, Prince Kumar, Praveen Venkateswaran, Danish Contractor (16 Oct 2024) [KELM, ALM, ELM]
Semantics-Adaptive Activation Intervention for LLMs via Dynamic Steering Vectors
  Weixuan Wang, J. Yang, Wei Peng (16 Oct 2024) [LLMSV]
MoE-Pruner: Pruning Mixture-of-Experts Large Language Model using the Hints from Its Router
  Yanyue Xie, Zhi Zhang, Ding Zhou, Cong Xie, Ziang Song, Xin Liu, Yanzhi Wang, Xue Lin, An Xu (15 Oct 2024) [LLMAG]
MoH: Multi-Head Attention as Mixture-of-Head Attention
  Peng Jin, Bo Zhu, Li Yuan, Shuicheng Yan (15 Oct 2024) [MoE]
Layer-wise Importance Matters: Less Memory for Better Performance in Parameter-efficient Fine-tuning of Large Language Models
  Kai Yao, P. Gao, Lichun Li, Yuan Zhao, Xiaofeng Wang, Wei Wang, Jianke Zhu (15 Oct 2024)
In-context KV-Cache Eviction for LLMs via Attention-Gate
  Zihao Zeng, Bokai Lin, Tianqi Hou, Hao Zhang, Zhijie Deng (15 Oct 2024)
LLM Unlearning via Loss Adjustment with Only Forget Data
  Yaxuan Wang, Jiaheng Wei, Chris Liu, Jinlong Pang, Qiang Liu, A. Shah, Yujia Bao, Yang Liu, Wei Wei (14 Oct 2024) [KELM, MU]
AlphaPruning: Using Heavy-Tailed Self Regularization Theory for Improved Layer-wise Pruning of Large Language Models
  Haiquan Lu, Yefan Zhou, Shiwei Liu, Zhangyang Wang, Michael W. Mahoney, Yaoqing Yang (14 Oct 2024)
RoCoFT: Efficient Finetuning of Large Language Models with Row-Column Updates
  Md. Kowsher, Tara Esmaeilbeig, Chun-Nam Yu, Mojtaba Soltanalian, Niloofar Yousefi (14 Oct 2024)
MoIN: Mixture of Introvert Experts to Upcycle an LLM
  Ajinkya Tejankar, K. Navaneet, Ujjawal Panchal, Kossar Pourahmadi, Hamed Pirsiavash (13 Oct 2024) [MoE]
Reverse Modeling in Large Language Models
  S. Yu, Yuanchen Xu, Cunxiao Du, Yanying Zhou, Minghui Qiu, Q. Sun, Hao Zhang, Jiawei Wu (13 Oct 2024)
ELICIT: LLM Augmentation via External In-Context Capability
  Futing Wang, Jianhao Yan, Yue Zhang, Tao Lin (12 Oct 2024)
Zeroth-Order Fine-Tuning of LLMs in Random Subspaces
  Ziming Yu, Pan Zhou, Sike Wang, Jia Li, Hua Huang (11 Oct 2024)
SPORTU: A Comprehensive Sports Understanding Benchmark for Multimodal Large Language Models
  H. Xia, Zhengbang Yang, Junbo Zou, Rhys Tracy, Yuqing Wang, ..., Xun Shao, Zhuoqing Xie, Yuan-fang Wang, Weining Shen, Hanjie Chen (11 Oct 2024) [ReLM, LRM, ELM]
Multi-Agent Collaborative Data Selection for Efficient LLM Pretraining
  Tianyi Bai, Ling Yang, Zhen Hao Wong, Jiahui Peng, Xinlin Zhuang, ..., Lijun Wu, Jiantao Qiu, Wentao Zhang, Binhang Yuan, Conghui He (10 Oct 2024) [LLMAG]
SLIM: Let LLM Learn More and Forget Less with Soft LoRA and Identity Mixture
  Jiayi Han, Liang Du, Hongwei Du, Xiangguo Zhou, Yiwen Wu, Weibo Zheng, Donghong Han (10 Oct 2024) [CLL, MoMe, MoE]
ACCEPT: Adaptive Codebook for Composite and Efficient Prompt Tuning
  Yu-Chen Lin, Wei-Hua Li, Jun-Cheng Chen, Chu-Song Chen (10 Oct 2024)
DemoShapley: Valuation of Demonstrations for In-Context Learning
  Shan Xie, Man Luo, Chadly Daniel Stern, Jundong Li, Lu Cheng (10 Oct 2024)
COMPL-AI Framework: A Technical Interpretation and LLM Benchmarking Suite for the EU Artificial Intelligence Act
  Philipp Guldimann, Alexander Spiridonov, Robin Staab, Nikola Jovanović, Mark Vero, ..., Mislav Balunović, Nikola Konstantinov, Pavol Bielik, Petar Tsankov, Martin Vechev (10 Oct 2024) [ELM]
QuAILoRA: Quantization-Aware Initialization for LoRA
  Neal Lawton, Aishwarya Padmakumar, Judith Gaspers, Jack FitzGerald, Anoop Kumar, Greg Ver Steeg, Aram Galstyan (09 Oct 2024) [MQ]
MoE++: Accelerating Mixture-of-Experts Methods with Zero-Computation Experts
  Peng Jin, Bo Zhu, Li Yuan, Shuicheng Yan (09 Oct 2024) [MoE]
One Initialization to Rule them All: Fine-tuning via Explained Variance Adaptation
  Fabian Paischer, Lukas Hauzenberger, Thomas Schmied, Benedikt Alkin, Marc Peter Deisenroth, Sepp Hochreiter (09 Oct 2024)
Capturing Bias Diversity in LLMs
  Purva Prasad Gosavi, Vaishnavi Murlidhar Kulkarni, Alan F. Smeaton (09 Oct 2024)
Chip-Tuning: Classify Before Language Models Say
  Fangwei Zhu, Dian Li, Jiajun Huang, Gang Liu, Hui Wang, Zhifang Sui (09 Oct 2024)
Addax: Utilizing Zeroth-Order Gradients to Improve Memory Efficiency and Performance of SGD for Fine-Tuning Language Models
  Zeman Li, Xinwei Zhang, Peilin Zhong, Yuan Deng, Meisam Razaviyayn, Vahab Mirrokni (09 Oct 2024)
Functional-level Uncertainty Quantification for Calibrated Fine-tuning on LLMs
  Ruijia Niu, D. Wu, Rose Yu, Yi Ma (09 Oct 2024)
Data Selection via Optimal Control for Language Models
  Yuxian Gu, Li Dong, Hongning Wang, Y. Hao, Qingxiu Dong, Furu Wei, Minlie Huang (09 Oct 2024) [AI4CE]
QERA: an Analytical Framework for Quantization Error Reconstruction
  Cheng Zhang, Jeffrey T. H. Wong, Can Xiao, George A. Constantinides, Yiren Zhao (08 Oct 2024) [MQ]
Attribute Controlled Fine-tuning for Large Language Models: A Case Study on Detoxification
  Tao Meng, Ninareh Mehrabi, Palash Goyal, Anil Ramakrishna, Aram Galstyan, Richard Zemel, Kai-Wei Chang, Rahul Gupta, Charith Peris (07 Oct 2024)
Superficial Safety Alignment Hypothesis
  Jianwei Li, Jung-Eun Kim (07 Oct 2024)
ESPACE: Dimensionality Reduction of Activations for Model Compression
  Charbel Sakr, Brucek Khailany (07 Oct 2024)
Cookbook: A framework for improving LLM generative abilities via programmatic data generating templates
  A. Narayan, Mayee F. Chen, Kush S. Bhatia, Christopher Ré (07 Oct 2024) [SyDa]
Initialization of Large Language Models via Reparameterization to Mitigate Loss Spikes
  Kosuke Nishida, Kyosuke Nishida, Kuniko Saito (07 Oct 2024)
LPZero: Language Model Zero-cost Proxy Search from Zero
  Peijie Dong, Lujun Li, Xiang Liu, Zhenheng Tang, Xuebo Liu, Qiang Wang, Xiaowen Chu (07 Oct 2024)
Continuous Approximations for Improving Quantization Aware Training of LLMs
  He Li, Jianhang Hong, Yuanzhuo Wu, Snehal Adbol, Zonglin Li (06 Oct 2024) [MQ]
How Much Can We Forget about Data Contamination?
  Sebastian Bordt, Suraj Srinivas, Valentyn Boreiko, U. V. Luxburg (04 Oct 2024)
EXAQ: Exponent Aware Quantization For LLMs Acceleration
  Moran Shkolnik, Maxim Fishman, Brian Chmiel, Hilla Ben-Yaacov, Ron Banner, Kfir Y. Levy (04 Oct 2024) [MQ]
Self-Powered LLM Modality Expansion for Large Speech-Text Models
  Tengfei Yu, Xuebo Liu, Zhiyi Hou, Liang Ding, Dacheng Tao, Min Zhang (04 Oct 2024)
ARB-LLM: Alternating Refined Binarizations for Large Language Models
  Zhiteng Li, Xinyu Yan, Tianao Zhang, Haotong Qin, Dong Xie, Jiang Tian, Zhongchao Shi, Linghe Kong, Yulun Zhang, Xiaokang Yang (04 Oct 2024) [MQ]
LoRC: Low-Rank Compression for LLMs KV Cache with a Progressive Compression Strategy
  Rongzhi Zhang, Kuang Wang, Liyuan Liu, Shuohang Wang, Hao Cheng, Chao Zhang, Yelong Shen (04 Oct 2024) [MQ]
Scaling Parameter-Constrained Language Models with Quality Data
  Ernie Chang, Matteo Paltenghi, Yang Li, Pin-Jie Lin, Changsheng Zhao, Patrick Huber, Zechun Liu, Rastislav Rabatin, Yangyang Shi, Vikas Chandra (04 Oct 2024)
Auto-Demo Prompting: Leveraging Generated Outputs as Demonstrations for Enhanced Batch Prompting
  Longyu Feng, Mengze Hong, Chen Jason Zhang (02 Oct 2024)
Mind Scramble: Unveiling Large Language Model Psychology Via Typoglycemia
  Miao Yu, Junyuan Mao, Guibin Zhang, Jingheng Ye, Junfeng Fang, Aoxiao Zhong, Yang Liu, Yuxuan Liang, Kun Wang, Qingsong Wen (02 Oct 2024)
House of Cards: Massive Weights in LLMs
  Jaehoon Oh, Seungjun Shin, Dokwan Oh (02 Oct 2024)
Fisher Information-based Efficient Curriculum Federated Learning with Large Language Models
  Ji Liu, Jiaxiang Ren, Ruoming Jin, Zijie Zhang, Yang Zhou, P. Valduriez, Dejing Dou (30 Sep 2024) [FedML]
Wait, but Tylenol is Acetaminophen... Investigating and Improving Language Models' Ability to Resist Requests for Misinformation
  Shan Chen, Mingye Gao, Kuleen Sasse, Thomas Hartvigsen, Brian Anthony, Lizhou Fan, Hugo J. W. L. Aerts, Jack Gallifant, Danielle S. Bitterman (30 Sep 2024) [LM&MA]
Rotated Runtime Smooth: Training-Free Activation Smoother for accurate INT4 inference
  Ke Yi, Zengke Liu, Jianwei Zhang, Chengyuan Li, Tong Zhang, Junyang Lin, Jingren Zhou (30 Sep 2024) [MQ]
Hyper-Connections
  Defa Zhu, Hongzhi Huang, Zihao Huang, Yutao Zeng, Yunyao Mao, Banggu Wu, Qiyang Min, Xun Zhou (29 Sep 2024)