ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.
arXiv:1911.11641
PIQA: Reasoning about Physical Commonsense in Natural Language
26 November 2019
Yonatan Bisk, Rowan Zellers, Ronan Le Bras, Jianfeng Gao, Yejin Choi

Papers citing "PIQA: Reasoning about Physical Commonsense in Natural Language"
Showing 50 of 1,393 citing papers.
• AdpQ: A Zero-shot Calibration Free Adaptive Post Training Quantization Method for LLMs. Alireza Ghaffari, Sharareh Younesian, Vahid Partovi Nia, Boxing Chen, M. Asgharian. 22 May 2024.
• FlashRAG: A Modular Toolkit for Efficient Retrieval-Augmented Generation Research. Jiajie Jin, Yutao Zhu, Xinyu Yang, Chenghao Zhang, Zhicheng Dou, Tong Zhao, Zhao Yang, Ji-Rong Wen. 22 May 2024.
• DaVinci at SemEval-2024 Task 9: Few-shot prompting GPT-3.5 for Unconventional Reasoning. Suyash Vardhan Mathur, Akshett Rai Jindal, Manish Shrivastava. 19 May 2024.
• Towards Modular LLMs by Building and Reusing a Library of LoRAs. O. Ostapenko, Zhan Su, Edoardo Ponti, Laurent Charlin, Nicolas Le Roux, Matheus Pereira, Lucas Caccia, Alessandro Sordoni. 18 May 2024.
• The Future of Large Language Model Pre-training is Federated. Lorenzo Sani, Alexandru Iacob, Zeyu Cao, Bill Marino, Yan Gao, ..., Wanru Zhao, William F. Shen, Preslav Aleksandrov, Xinchi Qiu, Nicholas D. Lane. 17 May 2024.
• Layer-Condensed KV Cache for Efficient Inference of Large Language Models. Haoyi Wu, Kewei Tu. 17 May 2024.
• Surgical Feature-Space Decomposition of LLMs: Why, When and How? Arnav Chavan, Nahush Lele, Deepak Gupta. 17 May 2024.
• Learnable Privacy Neurons Localization in Language Models. Ruizhe Chen, Tianxiang Hu, Yang Feng, Zuo-Qiang Liu. 16 May 2024.
• Chameleon: Mixed-Modal Early-Fusion Foundation Models. Chameleon Team. 16 May 2024.
• Elements of World Knowledge (EWoK): A Cognition-Inspired Framework for Evaluating Basic World Knowledge in Language Models. Anna A. Ivanova, Aalok Sathe, Benjamin Lipkin, Unnathi Kumar, S. Radkani, ..., Leshem Choshen, Roger Levy, Evelina Fedorenko, Josh Tenenbaum, Jacob Andreas. 15 May 2024.
• Contextual Emotion Recognition using Large Vision Language Models. Yasaman Etesam, Özge Nilay Yalçin, Chuxuan Zhang, Angelica Lim. 14 May 2024.
• Improving Transformers with Dynamically Composable Multi-Head Attention. Da Xiao, Qingye Meng, Shengping Li, Xingyuan Yuan. 14 May 2024.
• Zero-Shot Tokenizer Transfer. Benjamin Minixhofer, Edoardo Ponti, Ivan Vulić. 13 May 2024.
• OpenBA-V2: Reaching 77.3% High Compression Ratio with Fast Multi-Stage Pruning. Dan Qiao, Yi Su, Pinzheng Wang, Jing Ye, Wen Xie, ..., Wenliang Chen, Guohong Fu, Guodong Zhou, Qiaoming Zhu, Min Zhang. 09 May 2024.
• LLMC: Benchmarking Large Language Model Quantization with a Versatile Compression Toolkit. Ruihao Gong, Yang Yong, Shiqiao Gu, Yushi Huang, Chentao Lv, Yunchen Zhang, Xianglong Liu, Dacheng Tao. 09 May 2024.
• G-SAP: Graph-based Structure-Aware Prompt Learning over Heterogeneous Knowledge for Commonsense Reasoning. Ruiting Dai, Yuqiao Tan, Lisi Mo, Shuang Liang, Guohao Huo, Jiayi Luo, Yao Cheng. 09 May 2024.
• ADELIE: Aligning Large Language Models on Information Extraction. Yunjia Qi, Hao Peng, Xiaozhi Wang, Bin Xu, Lei Hou, Juanzi Li. 08 May 2024.
• ChuXin: 1.6B Technical Report. Xiaomin Zhuang, Yufan Jiang, Qiaozhi He, Zhihua Wu. 08 May 2024.
• Understanding the Capabilities and Limitations of Large Language Models for Cultural Commonsense. Siqi Shen, Lajanugen Logeswaran, Moontae Lee, Honglak Lee, Soujanya Poria, Rada Mihalcea. 07 May 2024.
• DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model. DeepSeek-AI, Aixin Liu, Bei Feng, Bin Wang, Bingxuan Wang, ..., Zhuoshu Li, Zihan Wang, Zihui Gu, Zilin Li, Ziwei Xie. 07 May 2024.
• KV Cache is 1 Bit Per Channel: Efficient Large Language Model Inference with Coupled Quantization. Tianyi Zhang, Jonah Yi, Zhaozhuo Xu, Anshumali Shrivastava. 07 May 2024.
• QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving. Chengyue Wu, Haotian Tang, Shang Yang, Zhekai Zhang, Guangxuan Xiao, Chuang Gan, Song Han. 07 May 2024.
• Quantifying the Capabilities of LLMs across Scale and Precision. Sher Badshah, Hassan Sajjad. 06 May 2024.
• Lory: Fully Differentiable Mixture-of-Experts for Autoregressive Language Model Pre-training. Zexuan Zhong, Mengzhou Xia, Danqi Chen, Mike Lewis. 06 May 2024.
• WDMoE: Wireless Distributed Large Language Models with Mixture of Experts. Nan Xue, Yaping Sun, Zhiyong Chen, Meixia Tao, Xiaodong Xu, Liang Qian, Shuguang Cui, Ping Zhang. 06 May 2024.
• Learning from Students: Applying t-Distributions to Explore Accurate and Efficient Formats for LLMs. Jordan Dotzel, Yuzong Chen, Bahaa Kotb, Sushma Prasad, Gang Wu, Sheng Li, Mohamed S. Abdelfattah, Zhiru Zhang. 06 May 2024.
• To Each (Textual Sequence) Its Own: Improving Memorized-Data Unlearning in Large Language Models. George-Octavian Barbulescu, Peter Triantafillou. 06 May 2024.
• Octopi: Object Property Reasoning with Large Tactile-Language Models. Samson Yu, Kelvin Lin, Anxing Xiao, Jiafei Duan, Harold Soh. 05 May 2024.
• Get more for less: Principled Data Selection for Warming Up Fine-Tuning in LLMs. Feiyang Kang, H. Just, Yifan Sun, Himanshu Jahagirdar, Yuanzhi Zhang, Rongxing Du, Anit Kumar Sahu, Ruoxi Jia. 05 May 2024.
• Dependency-Aware Semi-Structured Sparsity: Declining Roles of Outliers in Pruning GLU-based LLMs. Zhiyu Guo, Hidetaka Kamigaito, Taro Watanabe. 03 May 2024.
• Creative Problem Solving in Large Language and Vision Models -- What Would it Take? Lakshmi Nair, Evana Gizzi, Jivko Sinapov. 02 May 2024.
02 May 2024
DynaMo: Accelerating Language Model Inference with Dynamic Multi-Token
  Sampling
DynaMo: Accelerating Language Model Inference with Dynamic Multi-Token Sampling
Shikhar Tuli
Chi-Heng Lin
Yen-Chang Hsu
N. Jha
Yilin Shen
Hongxia Jin
AI4CE
50
3
0
01 May 2024
When Quantization Affects Confidence of Large Language Models?
When Quantization Affects Confidence of Large Language Models?
Irina Proskurina
Luc Brun
Guillaume Metzler
Julien Velcin
MQ
122
2
0
01 May 2024
CookingSense: A Culinary Knowledgebase with Multidisciplinary Assertions
CookingSense: A Culinary Knowledgebase with Multidisciplinary Assertions
Donghee Choi
Mogan Gim
Donghyeon Park
Mujeen Sung
Hyunjae Kim
Jaewoo Kang
Jihun Choi
77
1
0
01 May 2024
Self-Refine Instruction-Tuning for Aligning Reasoning in Language Models
Self-Refine Instruction-Tuning for Aligning Reasoning in Language Models
Leonardo Ranaldi
André Freitas
LRMReLM
88
16
0
01 May 2024
AdaMoLE: Fine-Tuning Large Language Models with Adaptive Mixture of
  Low-Rank Adaptation Experts
AdaMoLE: Fine-Tuning Large Language Models with Adaptive Mixture of Low-Rank Adaptation Experts
Zefang Liu
Jiahua Luo
MoEKELM
85
13
0
01 May 2024
Better & Faster Large Language Models via Multi-token Prediction
Better & Faster Large Language Models via Multi-token Prediction
Fabian Gloeckle
Badr Youbi Idrissi
Baptiste Rozière
David Lopez-Paz
Gabriele Synnaeve
114
121
0
30 Apr 2024
Time Machine GPT
Time Machine GPT
Felix Drinkall
Eghbal Rahimikia
J. Pierrehumbert
Stefan Zohren
AI4TSAI4CEKELMSyDa
85
4
0
29 Apr 2024
SOUL: Unlocking the Power of Second-Order Optimization for LLM
  Unlearning
SOUL: Unlocking the Power of Second-Order Optimization for LLM Unlearning
Jinghan Jia
Yihua Zhang
Yimeng Zhang
Jiancheng Liu
Bharat Runwal
James Diffenderfer
B. Kailkhura
Sijia Liu
MU
194
50
0
28 Apr 2024
Scaffold-BPE: Enhancing Byte Pair Encoding with Simple and Effective
  Scaffold Token Removal
Scaffold-BPE: Enhancing Byte Pair Encoding with Simple and Effective Scaffold Token Removal
Haoran Lian
Yizhe Xiong
Jianwei Niu
Shasha Mo
Zhenpeng Su
Zijia Lin
Peng Liu
Hui Chen
Guiguang Ding
59
3
0
27 Apr 2024
Temporal Scaling Law for Large Language Models
Temporal Scaling Law for Large Language Models
Yizhe Xiong
Xiansheng Chen
Xin Ye
Hui Chen
Zijia Lin
...
Zhenpeng Su
Wei Huang
Jianwei Niu
Jiawei Han
Guiguang Ding
120
10
0
27 Apr 2024
Text Quality-Based Pruning for Efficient Training of Language Models
Text Quality-Based Pruning for Efficient Training of Language Models
Vasu Sharma
Karthik Padthe
Newsha Ardalani
Kushal Tirumala
Russell Howes
...
Po-Yao Huang
Shang-Wen Li
Armen Aghajanyan
Gargi Ghosh
Luke Zettlemoyer
120
6
0
26 Apr 2024
Continual Learning of Large Language Models: A Comprehensive Survey
Continual Learning of Large Language Models: A Comprehensive Survey
Haizhou Shi
Zihao Xu
Hengyi Wang
Weiyi Qin
Wenyuan Wang
Yibin Wang
Zifeng Wang
Sayna Ebrahimi
Hao Wang
CLLKELMLRM
160
88
0
25 Apr 2024
• LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding. Mostafa Elhoushi, Akshat Shrivastava, Diana Liskovich, Basil Hosmer, Bram Wasti, ..., Saurabh Agarwal, Ahmed Roman, Ahmed Aly, Beidi Chen, Carole-Jean Wu. 25 Apr 2024.
• Nyonic Technical Report. Junfeng Tian, Rui Wang, Cong Li, Yudong Zhou, Jun Liu, Jun Wang. 24 Apr 2024.
• Insights into Alignment: Evaluating DPO and its Variants Across Multiple Tasks. Amir Saeidi, Shivanshu Verma, Chitta Baral. 23 Apr 2024.
• OpenELM: An Efficient Language Model Family with Open Training and Inference Framework. Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, ..., Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari. 22 Apr 2024.
• Phi-3 Technical Report: A Highly Capable Language Model Locally on Your Phone. Marah Abdin, Sam Ade Jacobs, A. A. Awan, J. Aneja, Ahmed Hassan Awadallah, ..., Li Zhang, Yi Zhang, Yue Zhang, Yunan Zhang, Xiren Zhou. 22 Apr 2024.
• An empirical study of LLaMA3 quantization: from LLMs to MLLMs. Wei Huang, Xingyu Zheng, Xudong Ma, Haotong Qin, Chengtao Lv, Hong Chen, Jie Luo, Xiaojuan Qi, Xianglong Liu, Michele Magno. 22 Apr 2024.
• SemEval-2024 Task 9: BRAINTEASER: A Novel Task Defying Common Sense. Yifan Jiang, Filip Ilievski, Kaixin Ma. 22 Apr 2024.