Can Compressed LLMs Truly Act? An Empirical Evaluation of Agentic Capabilities in LLM Compression
26 May 2025 · arXiv:2505.19433
Peijie Dong, Zhenheng Tang, Xiang Liu, Lujun Li, Xiaowen Chu, Bo Li

Papers citing "Can Compressed LLMs Truly Act? An Empirical Evaluation of Agentic Capabilities in LLM Compression"

Showing 15 of 115 papers:
Explanations from Large Language Models Make Small Reasoners Better
Shiyang Li, Jianshu Chen, Yelong Shen, Zhiyu Zoey Chen, Xinlu Zhang, ..., Jingu Qian, Baolin Peng, Yi Mao, Wenhu Chen, Xifeng Yan
ReLM, LRM · 13 Oct 2022
FOLIO: Natural Language Reasoning with First-Order Logic
Simeng Han, Hailey Schoelkopf, Yilun Zhao, Zhenting Qi, Martin Riddell, ..., Yingbo Zhou, Caiming Xiong, Rex Ying, Arman Cohan, Dragomir R. Radev
ReLM, LRM · 02 Sep 2022
Optimal Brain Compression: A Framework for Accurate Post-Training Quantization and Pruning
Elias Frantar, Sidak Pal Singh, Dan Alistarh
MQ · 24 Aug 2022
LUT-GEMM: Quantized Matrix Multiplication based on LUTs for Efficient Inference in Large-Scale Generative Language Models
Gunho Park, Baeseong Park, Minsub Kim, Sungjae Lee, Jeonghoon Kim, Beomseok Kwon, S. Kwon, Byeongwook Kim, Youngjoo Lee, Dongsoo Lee
MQ · 20 Jun 2022
Self-Consistency Improves Chain of Thought Reasoning in Language Models
Xuezhi Wang, Jason W. Wei, Dale Schuurmans, Quoc Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery, Denny Zhou
ReLM, BDL, LRM, AI4CE · 21 Mar 2022
Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
Jason W. Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, F. Xia, Ed H. Chi, Quoc Le, Denny Zhou
LM&Ro, LRM, AI4CE, ReLM · 28 Jan 2022
Machine Translation Decoding beyond Beam Search
Rémi Leblond, Jean-Baptiste Alayrac, Laurent Sifre, Miruna Pislar, Jean-Baptiste Lespiau, Ioannis Antonoglou, Karen Simonyan, Oriol Vinyals
12 Apr 2021
The Pile: An 800GB Dataset of Diverse Text for Language Modeling
Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, ..., Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, Connor Leahy
AIMat · 31 Dec 2020
Energy-based Out-of-distribution Detection
Weitang Liu, Xiaoyun Wang, John Douglas Owens, Yixuan Li
OODD · 08 Oct 2020
RealToxicityPrompts: Evaluating Neural Toxic Degeneration in Language Models
Samuel Gehman, Suchin Gururangan, Maarten Sap, Yejin Choi, Noah A. Smith
24 Sep 2020
Language Models are Few-Shot Learners
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, ..., Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei
BDL · 28 May 2020
Importance Estimation for Neural Network Pruning
Pavlo Molchanov, Arun Mallya, Stephen Tyree, I. Frosio, Jan Kautz
3DPC · 25 Jun 2019
Training Confidence-calibrated Classifiers for Detecting Out-of-Distribution Samples
Kimin Lee, Honglak Lee, Kibok Lee, Jinwoo Shin
OODD · 26 Nov 2017
Learning both Weights and Connections for Efficient Neural Networks
Song Han, Jeff Pool, J. Tran, W. Dally
CVBM · 08 Jun 2015
Distilling the Knowledge in a Neural Network
Geoffrey E. Hinton, Oriol Vinyals, J. Dean
FedML · 09 Mar 2015