ResearchTrend.AI

GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM (arXiv:2403.05527)

8 March 2024
Hao Kang, Qingru Zhang, Souvik Kundu, Geonhwa Jeong, Zaoxing Liu, Tushar Krishna, Tuo Zhao
    MQ

Papers citing "GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM"

13 / 63 papers shown
ZipCache: Accurate and Efficient KV Cache Quantization with Salient Token Identification
Yefei He, Luoming Zhang, Weijia Wu, Jing Liu, Hong Zhou, Bohan Zhuang
MQ
23 May 2024
Critical Infrastructure Protection: Generative AI, Challenges, and Opportunities
Yagmur Yigit, M. Ferrag, Iqbal H. Sarker, Leandros A. Maglaras, Christos Chrysoulas, Naghmeh Moradpoor, Helge Janicke
08 May 2024
Efficient LLM Inference with Kcache
Qiaozhi He, Zhihua Wu
RALM
28 Apr 2024
Retrieval Head Mechanistically Explains Long-Context Factuality
Wenhao Wu, Yizhong Wang, Guangxuan Xiao, Hao-Chun Peng, Yao Fu
LRM
24 Apr 2024
Parameter-Efficient Fine-Tuning for Large Models: A Comprehensive Survey
Zeyu Han, Chao Gao, Jinyang Liu, Jeff Zhang, Sai Qian Zhang
21 Mar 2024
AFLoRA: Adaptive Freezing of Low Rank Adaptation in Parameter Efficient Fine-Tuning of Large Models
Zeyu Liu, Souvik Kundu, Anni Li, Junrui Wan, Lianghao Jiang, P. Beerel
20 Mar 2024
Keyformer: KV Cache Reduction through Key Tokens Selection for Efficient Generative Inference
Muhammad Adnan, Akhil Arunkumar, Gaurav Jain, Prashant J. Nair, Ilya Soloveychik, Purushotham Kamath
14 Mar 2024
The Faiss library
Matthijs Douze, Alexandr Guzhva, Chengqi Deng, Jeff Johnson, Gergely Szilvasy, Pierre-Emmanuel Mazaré, Maria Lomeli, Lucas Hosseini, Hervé Jégou
16 Jan 2024
Transformers are Multi-State RNNs
Matanel Oren, Michael Hassid, Nir Yarden, Yossi Adi, Roy Schwartz
OffRL
11 Jan 2024
SGLang: Efficient Execution of Structured Language Model Programs
Lianmin Zheng, Liangsheng Yin, Zhiqiang Xie, Chuyue Sun, Jeff Huang, ..., Christos Kozyrakis, Ion Stoica, Joseph E. Gonzalez, Clark W. Barrett, Ying Sheng
LRM
12 Dec 2023
ReLU Strikes Back: Exploiting Activation Sparsity in Large Language Models
Iman Mirzadeh, Keivan Alizadeh-Vahid, Sachin Mehta, C. C. D. Mundo, Oncel Tuzel, Golnoosh Samei, Mohammad Rastegari, Mehrdad Farajtabar
06 Oct 2023
FlexGen: High-Throughput Generative Inference of Large Language Models with a Single GPU
Ying Sheng, Lianmin Zheng, Binhang Yuan, Zhuohan Li, Max Ryabinin, ..., Joseph E. Gonzalez, Percy Liang, Christopher Ré, Ion Stoica, Ce Zhang
13 Mar 2023
Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
Jason W. Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, F. Xia, Ed H. Chi, Quoc Le, Denny Zhou
LM&Ro, LRM, AI4CE, ReLM
28 Jan 2022