Efficient Memory Management for Large Language Model Serving with PagedAttention

12 September 2023
Woosuk Kwon
Zhuohan Li
Siyuan Zhuang
Ying Sheng
Lianmin Zheng
Cody Hao Yu
Joseph E. Gonzalez
Hao Zhang
Ion Stoica
    VLM

Papers citing "Efficient Memory Management for Large Language Model Serving with PagedAttention"

50 / 412 papers shown
QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving
Chengyue Wu
Haotian Tang
Shang Yang
Zhekai Zhang
Guangxuan Xiao
Chuang Gan
Song Han
90
77
0
07 May 2024
Large Language Models (LLMs) as Agents for Augmented Democracy
Jairo Gudiño-Rosero
Umberto Grandi
César A. Hidalgo
LLMAG
37
127
0
06 May 2024
Mélange: Cost Efficient Large Language Model Serving by Exploiting GPU Heterogeneity
Tyler Griggs
Xiaoxuan Liu
Jiaxiang Yu
Doyoung Kim
Wei-Lin Chiang
Alvin Cheung
Ion Stoica
54
16
0
22 Apr 2024
RAGCache: Efficient Knowledge Caching for Retrieval-Augmented Generation
Chao Jin
Zili Zhang
Xuanlin Jiang
Fangyue Liu
Xin Liu
Xuanzhe Liu
Xin Jin
44
41
0
18 Apr 2024
Rho-1: Not All Tokens Are What You Need
Zheng-Wen Lin
Zhibin Gou
Yeyun Gong
Xiao Liu
Yelong Shen
...
Chen Lin
Yujiu Yang
Jian Jiao
Nan Duan
Weizhu Chen
CLL
50
57
0
11 Apr 2024
SqueezeAttention: 2D Management of KV-Cache in LLM Inference via Layer-wise Optimal Budget
Zihao Wang
Shaoduo Gan
37
6
0
07 Apr 2024
Towards Pareto Optimal Throughput in Small Language Model Serving
Pol G. Recasens
Yue Zhu
Chen Wang
Eun Kyung Lee
Olivier Tardieu
Alaa Youssef
Jordi Torres
Josep Ll. Berral
40
4
0
04 Apr 2024
CodeEditorBench: Evaluating Code Editing Capability of Large Language Models
Jiawei Guo
Ziming Li
Xueling Liu
Kaijing Ma
Tianyu Zheng
...
Xingwei Qu
Xiang Yue
Ge Zhang
Wenhu Chen
Jie Fu
KELM
59
12
0
04 Apr 2024
Linear Attention Sequence Parallelism
Weigao Sun
Zhen Qin
Dong Li
Xuyang Shen
Yu Qiao
Yiran Zhong
76
2
0
03 Apr 2024
Evalverse: Unified and Accessible Library for Large Language Model Evaluation
Jihoo Kim
Wonho Song
Dahyun Kim
Yunsu Kim
Yungi Kim
Chanjun Park
ELM
71
3
0
01 Apr 2024
Instruction-Driven Game Engines on Large Language Models
Hongqiu Wu
Xing-Chen Liu
Haizhen Zhao
Min Zhang
44
1
0
30 Mar 2024
DeFT: Decoding with Flash Tree-attention for Efficient Tree-structured LLM Inference
Jinwei Yao
Kaiqi Chen
Kexun Zhang
Jiaxuan You
Binhang Yuan
Zeke Wang
Tao Lin
48
2
0
30 Mar 2024
FastDecode: High-Throughput GPU-Efficient LLM Serving using Heterogeneous Pipelines
Jiaao He
Jidong Zhai
45
27
0
18 Mar 2024
NovelQA: Benchmarking Question Answering on Documents Exceeding 200K Tokens
Cunxiang Wang
Ruoxi Ning
Boqi Pan
Tonghui Wu
Qipeng Guo
...
Guangsheng Bao
Xiangkun Hu
Zheng Zhang
Qian Wang
Yue Zhang
RALM
106
4
0
18 Mar 2024
Yi: Open Foundation Models by 01.AI
01.AI
Alex Young
Bei Chen
Chao Li
...
Yue Wang
Yuxuan Cai
Zhenyu Gu
Zhiyuan Liu
Zonghong Dai
OSLM
LRM
150
511
0
07 Mar 2024
HeteGen: Heterogeneous Parallel Inference for Large Language Models on Resource-Constrained Devices
Xuanlei Zhao
Bin Jia
Hao Zhou
Ziming Liu
Shenggan Cheng
Yang You
27
4
0
02 Mar 2024
FlexLLM: A System for Co-Serving Large Language Model Inference and Parameter-Efficient Finetuning
Xupeng Miao
Gabriele Oliaro
Xinhao Cheng
Vineeth Kada
Ruohan Gao
...
April Yang
Yingcheng Wang
Mengdi Wu
Colin Unger
Zhihao Jia
MoE
94
9
0
29 Feb 2024
CLLMs: Consistency Large Language Models
Siqi Kou
Lanxiang Hu
Zhe He
Zhijie Deng
Hao Zhang
52
28
0
28 Feb 2024
Arithmetic Control of LLMs for Diverse User Preferences: Directional Preference Alignment with Multi-Objective Rewards
Haoxiang Wang
Yong Lin
Wei Xiong
Rui Yang
Shizhe Diao
Shuang Qiu
Han Zhao
Tong Zhang
40
72
0
28 Feb 2024
The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits
Shuming Ma
Hongyu Wang
Lingxiao Ma
Lei Wang
Wenhui Wang
Shaohan Huang
Lifeng Dong
Ruiping Wang
Jilong Xue
Furu Wei
MQ
45
208
0
27 Feb 2024
AmbigNLG: Addressing Task Ambiguity in Instruction for NLG
Ayana Niwa
Hayate Iso
36
4
0
27 Feb 2024
GenAINet: Enabling Wireless Collective Intelligence via Knowledge Transfer and Reasoning
Han Zou
Qiyang Zhao
Lina Bariah
Yu Tian
M. Bennis
S. Lasaulce
101
12
0
26 Feb 2024
Interactive-KBQA: Multi-Turn Interactions for Knowledge Base Question Answering with Large Language Models
Guanming Xiong
Junwei Bao
Wen Zhao
KELM
64
10
0
23 Feb 2024
RelayAttention for Efficient Large Language Model Serving with Long System Prompts
Lei Zhu
Xinjiang Wang
Wayne Zhang
Rynson W. H. Lau
33
6
0
22 Feb 2024
Is Open-Source There Yet? A Comparative Study on Commercial and Open-Source LLMs in Their Ability to Label Chest X-Ray Reports
Felix J. Dorfner
Liv Jürgensen
Leonhard Donle
Fares Al Mohamad
Tobias R. Bodenmann
...
Thomas Schultz
Albert E. Kim
J. Merkow
Keno K. Bressem
Christopher P. Bridge
LM&MA
VLM
33
7
0
19 Feb 2024
MatPlotAgent: Method and Evaluation for LLM-Based Agentic Scientific Data Visualization
Zhiyu Yang
Zihan Zhou
Shuo Wang
Xin Cong
Xu Han
...
Pengyuan Liu
Dong Yu
Zhiyuan Liu
Xiaodong Shi
Maosong Sun
LLMAG
39
28
0
18 Feb 2024
Natural Language Reinforcement Learning
Xidong Feng
Bo Liu
Mengyue Yang
Ziyan Wang
Girish A. Koushiks
Yali Du
Ying Wen
Jun Wang
OffRL
35
3
0
11 Feb 2024
WebLINX: Real-World Website Navigation with Multi-Turn Dialogue
Xing Han Lù
Zdeněk Kasner
Siva Reddy
34
60
0
08 Feb 2024
Hydra: Sequentially-Dependent Draft Heads for Medusa Decoding
Zack Ankner
Rishab Parthasarathy
Aniruddha Nrusimha
Christopher Rinard
Jonathan Ragan-Kelley
William Brandon
34
26
0
07 Feb 2024
ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs
Zhengyan Zhang
Yixin Song
Guanghui Yu
Xu Han
Yankai Lin
Chaojun Xiao
Chenyang Song
Zhiyuan Liu
Zeyu Mi
Maosong Sun
24
31
0
06 Feb 2024
The Landscape and Challenges of HPC Research and LLMs
Le Chen
Nesreen K. Ahmed
Akashnil Dutta
Arijit Bhattacharjee
Sixing Yu
...
Vy A. Vo
J. P. Muñoz
Ted Willke
Tim Mattson
Ali Jannesari
AI4CE
48
20
0
03 Feb 2024
Faster and Lighter LLMs: A Survey on Current Challenges and Way Forward
Arnav Chavan
Raghav Magazine
Shubham Kushwaha
M. Debbah
Deepak Gupta
23
18
0
02 Feb 2024
Decoding Speculative Decoding
Minghao Yan
Saurabh Agarwal
Shivaram Venkataraman
LRM
42
6
0
02 Feb 2024
LLM-based NLG Evaluation: Current Status and Challenges
Mingqi Gao
Xinyu Hu
Jie Ruan
Xiao Pu
Xiaojun Wan
ELM
LM&MA
71
30
0
02 Feb 2024
RE-GAINS & EnChAnT: Intelligent Tool Manipulation Systems For Enhanced Query Responses
Sahil Girhepuje
Siva Sankar Sajeev
Purvam Jain
Arya Sikder
Adithya Rama Varma
Ryan George
Akshay Govind Srinivasan
Mahendra Kurup
Ashmit Sinha
Sudip Mondal
RALM
37
0
0
28 Jan 2024
Improving Medical Reasoning through Retrieval and Self-Reflection with Retrieval-Augmented Large Language Models
Minbyul Jeong
Jiwoong Sohn
Mujeen Sung
Jaewoo Kang
25
29
0
27 Jan 2024
AgentBoard: An Analytical Evaluation Board of Multi-turn LLM Agents
Chang Ma
Junlei Zhang
Zhihao Zhu
Cheng Yang
Yujiu Yang
Yaohui Jin
Zhenzhong Lan
Lingpeng Kong
Junxian He
ELM
LLMAG
37
60
0
24 Jan 2024
BiTA: Bi-Directional Tuning for Lossless Acceleration in Large Language Models
Feng-Huei Lin
Hanling Yi
Hongbin Li
Yifan Yang
Xiaotian Yu
Guangming Lu
Rong Xiao
41
3
0
23 Jan 2024
Medusa: Simple LLM Inference Acceleration Framework with Multiple Decoding Heads
Tianle Cai
Yuhong Li
Zhengyang Geng
Hongwu Peng
Jason D. Lee
Deming Chen
Tri Dao
60
257
0
19 Jan 2024
Unlocking Efficiency in Large Language Model Inference: A Comprehensive Survey of Speculative Decoding
Heming Xia
Zhe Yang
Qingxiu Dong
Peiyi Wang
Yongqi Li
Tao Ge
Tianyu Liu
Wenjie Li
Zhifang Sui
LRM
38
105
0
15 Jan 2024
FFSplit: Split Feed-Forward Network For Optimizing Accuracy-Efficiency Trade-off in Language Model Inference
Zirui Liu
Qingquan Song
Q. Xiao
Sathiya Keerthi Selvaraj
Rahul Mazumder
Aman Gupta
Xia Hu
42
4
0
08 Jan 2024
KnowledgeNavigator: Leveraging Large Language Models for Enhanced Reasoning over Knowledge Graph
Tiezheng Guo
Qingwen Yang
Chen Wang
Yanyi Liu
Pan Li
Jiawei Tang
Dapeng Li
Yingyou Wen
RALM
LRM
39
22
0
26 Dec 2023
KwaiAgents: Generalized Information-seeking Agent System with Large Language Models
Haojie Pan
Zepeng Zhai
Hao Yuan
Yaojia Lv
Ruiji Fu
Ming Liu
Zhongyuan Wang
Bing Qin
LLMAG
RALM
26
10
0
08 Dec 2023
Splitwise: Efficient generative LLM inference using phase splitting
Pratyush Patel
Esha Choukse
Chaojie Zhang
Aashaka Shah
Íñigo Goiri
Saeed Maleki
Ricardo Bianchini
61
203
0
30 Nov 2023
PrivateLoRA For Efficient Privacy Preserving LLM
Yiming Wang
Yu Lin
Xiaodong Zeng
Guannan Zhang
66
11
0
23 Nov 2023
Examining LLMs' Uncertainty Expression Towards Questions Outside Parametric Knowledge
Genglin Liu
Xingyao Wang
Lifan Yuan
Yangyi Chen
Hao Peng
34
16
0
16 Nov 2023
TextEE: Benchmark, Reevaluation, Reflections, and Future Challenges in Event Extraction
Kuan-Hao Huang
I-Hung Hsu
Tanmay Parekh
Zhiyu Xie
Zixuan Zhang
Premkumar Natarajan
Kai-Wei Chang
Nanyun Peng
Heng Ji
32
16
0
16 Nov 2023
Neuro-Symbolic Integration Brings Causal and Reliable Reasoning Proofs
Sen Yang
Xin Li
Leyang Cui
Li Bing
Wai Lam
LRM
NAI
39
15
0
16 Nov 2023
Prompt Cache: Modular Attention Reuse for Low-Latency Inference
In Gim
Guojun Chen
Seung-seob Lee
Nikhil Sarda
Anurag Khandelwal
Lin Zhong
42
77
0
07 Nov 2023
Relax: Composable Abstractions for End-to-End Dynamic Machine Learning
Ruihang Lai
Junru Shao
Siyuan Feng
Steven Lyubomirsky
Bohan Hou
...
Sunghyun Park
Prakalp Srivastava
Jared Roesch
T. Mowry
Tianqi Chen
47
9
0
01 Nov 2023