ResearchTrend.AI

PIQA: Reasoning about Physical Commonsense in Natural Language


26 November 2019
Yonatan Bisk
Rowan Zellers
Ronan Le Bras
Jianfeng Gao
Yejin Choi
arXiv: 1911.11641

Papers citing "PIQA: Reasoning about Physical Commonsense in Natural Language"

50 / 1,393 papers shown
Towards Scalable Exact Machine Unlearning Using Parameter-Efficient Fine-Tuning
Somnath Basu Roy Chowdhury
Krzysztof Choromanski
Arijit Sehanobish
Avinava Dubey
Snigdha Chaturvedi
24 Jun 2024
Unveiling and Harnessing Hidden Attention Sinks: Enhancing Large Language Models without Training through Attention Calibration
Zhongzhi Yu
Zheng Wang
Yonggan Fu
Huihong Shi
Khalid Shaikh
Yingyan Celine Lin
22 Jun 2024
RankAdaptor: Hierarchical Dynamic Low-Rank Adaptation for Structural Pruned LLMs
Changhai Zhou
Shijie Han
Shiyang Zhang
Shichao Weng
Zekai Liu
Cheng Jin
22 Jun 2024
ICLEval: Evaluating In-Context Learning Ability of Large Language Models
Wentong Chen
Yankai Lin
ZhenHao Zhou
HongYun Huang
Yantao Jia
Bo Zhao
Ji-Rong Wen
21 Jun 2024
Efficient Continual Pre-training by Mitigating the Stability Gap
Yiduo Guo
Jie Fu
Huishuai Zhang
Dongyan Zhao
Songlin Yang
21 Jun 2024
Instruction Pre-Training: Language Models are Supervised Multitask Learners
Daixuan Cheng
Yuxian Gu
Shaohan Huang
Junyu Bi
Minlie Huang
Furu Wei
20 Jun 2024
Protecting Privacy Through Approximating Optimal Parameters for Sequence Unlearning in Language Models
Dohyun Lee
Daniel Rim
Minseok Choi
Jaegul Choo
20 Jun 2024
Improving Visual Commonsense in Language Models via Multiple Image Generation
Guy Yariv
Idan Schwartz
Yossi Adi
Sagie Benaim
19 Jun 2024
BiLD: Bi-directional Logits Difference Loss for Large Language Model Distillation
Minchong Li
Feng Zhou
Xiaohui Song
19 Jun 2024
Mitigating Social Biases in Language Models through Unlearning
O. Dige
Diljot Singh
Tsz Fung Yau
Qixuan Zhang
Borna Bolandraftar
Xiaodan Zhu
Faiza Khan Khattak
19 Jun 2024
AdaMoE: Token-Adaptive Routing with Null Experts for Mixture-of-Experts Language Models
Zihao Zeng
Yibo Miao
Hongcheng Gao
Hao Zhang
Zhijie Deng
19 Jun 2024
BoA: Attention-aware Post-training Quantization without Backpropagation
Junhan Kim
Ho-Young Kim
Eulrang Cho
Chungman Lee
Joonyoung Kim
Yongkweon Jeon
19 Jun 2024
LaMDA: Large Model Fine-Tuning via Spectrally Decomposed Low-Dimensional Adaptation
Seyedarmin Azizi
Souvik Kundu
Massoud Pedram
18 Jun 2024
Mixture of Scales: Memory-Efficient Token-Adaptive Binarization for Large Language Models
Dongwon Jo
Taesu Kim
Yulhwa Kim
Jae-Joon Kim
18 Jun 2024
UBench: Benchmarking Uncertainty in Large Language Models with Multiple Choice Questions
Xunzhi Wang
Zhuowei Zhang
Qiongyu Li
Gaonan Chen
Mengting Hu
Zhixin Han
Bitong Luo
Zhiyu li
Hang Gao
18 Jun 2024
Self-MoE: Towards Compositional Large Language Models with Self-Specialized Experts
Junmo Kang
Leonid Karlinsky
Hongyin Luo
Zhen Wang
Jacob A. Hansen
James Glass
David D. Cox
Yikang Shen
Rogerio Feris
Alan Ritter
17 Jun 2024
LiLiuM: eBay's Large Language Models for e-commerce
Christian Herold
Michael Kozielski
Leonid Ekimov
Pavel Petrushkov
P. Vandenbussche
Shahram Khadivi
17 Jun 2024
Split, Unlearn, Merge: Leveraging Data Attributes for More Effective Unlearning in LLMs
S. Kadhe
Farhan Ahmed
Dennis Wei
Nathalie Baracaldo
Inkit Padhi
17 Jun 2024
MEMLA: Enhancing Multilingual Knowledge Editing with Neuron-Masked Low-Rank Adaptation
Jiakuan Xie
Pengfei Cao
Yuheng Chen
Yubo Chen
Kang Liu
Jun Zhao
17 Jun 2024
CodeGemma: Open Code Models Based on Gemma
CodeGemma Team
Heri Zhao
Jeffrey Hui
Joshua Howland
Nam Nguyen
...
Ale Jakse Hartman
Bin Ni
Kathy Korevec
Kelly Schaefer
Scott Huffman
17 Jun 2024
RUPBench: Benchmarking Reasoning Under Perturbations for Robustness Evaluation in Large Language Models
Yuqing Wang
Yun Zhao
16 Jun 2024
Eliminating Biased Length Reliance of Direct Preference Optimization via Down-Sampled KL Divergence
Junru Lu
Jiazheng Li
Siyu An
Meng Zhao
Yulan He
Di Yin
Xing Sun
16 Jun 2024
On the Role of Entity and Event Level Conceptualization in Generalizable Reasoning: A Survey of Tasks, Methods, Applications, and Future Directions
Weiqi Wang
Tianqing Fang
Haochen Shi
Baixuan Xu
Wenxuan Ding
...
Wei Fan
Jiaxin Bai
Haoran Li
Xin Liu
Yangqiu Song
16 Jun 2024
Optimization of Armv9 architecture general large language model inference performance based on Llama.cpp
Longhao Chen
Yina Zhao
Qiangjun Xie
Qinghua Sheng
16 Jun 2024
RoseLoRA: Row and Column-wise Sparse Low-rank Adaptation of Pre-trained Language Model for Knowledge Editing and Fine-tuning
Haoyu Wang
Tianci Liu
Ruirui Li
Monica Cheng
Tuo Zhao
Jing Gao
16 Jun 2024
Mixture-of-Subspaces in Low-Rank Adaptation
Taiqiang Wu
Jiahao Wang
Zhe Zhao
Ngai Wong
16 Jun 2024
CoLoR-Filter: Conditional Loss Reduction Filtering for Targeted Language Model Pre-training
David Brandfonbrener
Hanlin Zhang
Andreas Kirsch
Jonathan Richard Schwarz
Sham Kakade
15 Jun 2024
BlockPruner: Fine-grained Pruning for Large Language Models
Longguang Zhong
Fanqi Wan
Ruijun Chen
Xiaojun Quan
Liangzhi Li
15 Jun 2024
Quantifying Variance in Evaluation Benchmarks
Lovish Madaan
Aaditya K. Singh
Rylan Schaeffer
Andrew Poulton
Sanmi Koyejo
Pontus Stenetorp
Sharan Narang
Dieuwke Hupkes
14 Jun 2024
GenQA: Generating Millions of Instructions from a Handful of Prompts
Jiuhai Chen
Rifaa Qadri
Yuxin Wen
Neel Jain
John Kirchenbauer
Dinesh Manocha
Tom Goldstein
14 Jun 2024
QQQ: Quality Quattuor-Bit Quantization for Large Language Models
Ying Zhang
Peng Zhang
Mincong Huang
Jingyang Xiang
Yujie Wang
Chao Wang
Yineng Zhang
Lei Yu
Chuan Liu
Wei Lin
14 Jun 2024
MLKV: Multi-Layer Key-Value Heads for Memory Efficient Transformer Decoding
Zayd Muhammad Kawakibi Zuhri
Muhammad Farid Adilazuarda
Ayu Purwarianti
Alham Fikri Aji
13 Jun 2024
MiLoRA: Harnessing Minor Singular Components for Parameter-Efficient LLM Finetuning
Hanqing Wang
Zeguan Xiao
Shuo Wang
Guanhua Chen
13 Jun 2024
Reversing the Forget-Retain Objectives: An Efficient LLM Unlearning Framework from Logit Difference
Jiabao Ji
Yujian Liu
Yang Zhang
Gaowen Liu
Ramana Rao Kompella
Sijia Liu
Shiyu Chang
12 Jun 2024
Large Language Models Must Be Taught to Know What They Don't Know
Sanyam Kapoor
Nate Gruver
Manley Roberts
Katherine Collins
Arka Pal
Umang Bhatt
Adrian Weller
Samuel Dooley
Micah Goldblum
Andrew Gordon Wilson
12 Jun 2024
An Empirical Study of Mamba-based Language Models
R. Waleffe
Wonmin Byeon
Duncan Riach
Brandon Norick
V. Korthikanti
...
Vartika Singh
Jared Casper
Jan Kautz
Mohammad Shoeybi
Bryan Catanzaro
12 Jun 2024
ALPS: Improved Optimization for Highly Sparse One-Shot Pruning for Large Language Models
Xiang Meng
Kayhan Behdin
Haoyue Wang
Rahul Mazumder
12 Jun 2024
OLMES: A Standard for Language Model Evaluations
Yuling Gu
Oyvind Tafjord
Bailey Kuehl
Dany Haddad
Jesse Dodge
Hannaneh Hajishirzi
12 Jun 2024
Open-LLM-Leaderboard: From Multi-choice to Open-style Questions for LLMs Evaluation, Benchmark, and Arena
Aidar Myrzakhan
Sondos Mahmoud Bsharat
Zhiqiang Shen
11 Jun 2024
Paraphrasing in Affirmative Terms Improves Negation Understanding
MohammadHossein Rezaei
Eduardo Blanco
11 Jun 2024
When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models
Haoran You
Yichao Fu
Zheng Wang
Amir Yazdanbakhsh
Yingyan Celine Lin
11 Jun 2024
TernaryLLM: Ternarized Large Language Model
Tianqi Chen
Zhe Li
Weixiang Xu
Zeyu Zhu
Dong Li
Lu Tian
E. Barsoum
Peisong Wang
Jian Cheng
11 Jun 2024
Effectively Compress KV Heads for LLM
Hao Yu
Zelan Yang
Shen Li
Jianxin Wu
11 Jun 2024
MoreauPruner: Robust Pruning of Large Language Models against Weight Perturbations
Zixiao Wang
Jingwei Zhang
Wenqian Zhao
Farzan Farnia
Bei Yu
11 Jun 2024
Flextron: Many-in-One Flexible Large Language Model
Ruisi Cai
Saurav Muralidharan
Greg Heinrich
Hongxu Yin
Zhangyang Wang
Jan Kautz
Pavlo Molchanov
11 Jun 2024
Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling
Liliang Ren
Yang Liu
Yadong Lu
Yelong Shen
Chen Liang
Weizhu Chen
11 Jun 2024
Low-Rank Quantization-Aware Training for LLMs
Yelysei Bondarenko
Riccardo Del Chiaro
Markus Nagel
10 Jun 2024
MATES: Model-Aware Data Selection for Efficient Pretraining with Data Influence Models
Zichun Yu
Spandan Das
Chenyan Xiong
10 Jun 2024
Turbo Sparse: Achieving LLM SOTA Performance with Minimal Activated Parameters
Yixin Song
Haotong Xie
Zhengyan Zhang
Bo Wen
Li Ma
Zeyu Mi
Haibo Chen
10 Jun 2024
BERTs are Generative In-Context Learners
David Samuel
07 Jun 2024