ResearchTrend.AI
© 2025 ResearchTrend.AI. All rights reserved.

Large Language Model Inference Acceleration: A Comprehensive Hardware Perspective

6 October 2024
Jinhao Li, Jiaming Xu, Shan Huang, Yonghua Chen, Wen Li, Jun Liu, Yaoxiu Lian, Jiayi Pan, Li Ding, Hao Zhou, Yu Wang, Guohao Dai
arXiv:2410.04466

Papers citing "Large Language Model Inference Acceleration: A Comprehensive Hardware Perspective"

Showing 50 of 136 citing papers.

1. EdgeWisePersona: A Dataset for On-Device User Profiling from Natural Language Interactions
   Patryk Bartkowiak, Michal Podstawski (16 May 2025)
2. Token Level Routing Inference System for Edge Devices
   Jianshu She, Wenhao Zheng, Zhengzhong Liu, Hongyi Wang, Eric P. Xing, Huaxiu Yao, Qirong Ho (10 Apr 2025)
3. RAGO: Systematic Performance Optimization for Retrieval-Augmented Generation Serving
   Wenqi Jiang, Suvinay Subramanian, Cat Graves, Gustavo Alonso, Amir Yazdanbakhsh, Vidushi Dadu (18 Mar 2025)
4. ROMA: a Read-Only-Memory-based Accelerator for QLoRA-based On-Device LLM
   Wenqiang Wang, Yijia Zhang, Zikai Zhang, Guanting Huo, Hao Liang, Shijie Cao, Ningyi Xu (17 Mar 2025)
5. Changing Base Without Losing Pace: A GPU-Efficient Alternative to MatMul in DNNs
   Nir Ailon, Akhiad Bercovich, Omri Weinstein (15 Mar 2025)
6. Dynamic Parallel Tree Search for Efficient LLM Reasoning
   Yifu Ding, Wentao Jiang, Shunyu Liu, Yongcheng Jing, Jinpei Guo, ..., Zengmao Wang, Ziqiang Liu, Di Lin, Xianglong Liu, Dacheng Tao (22 Feb 2025)
7. Fast Matrix Multiplications for Lookup Table-Quantized LLMs
   Han Guo, William Brandon, Radostin Cholakov, Jonathan Ragan-Kelley, Eric P. Xing, Yoon Kim (20 Jan 2025)
8. SoftmAP: Software-Hardware Co-design for Integer-Only Softmax on Associative Processors
   M. Rakka, Jiajian Li, Guohao Dai, A. Eltawil, M. Fouda, Fadi J. Kurdahi (26 Nov 2024)
9. MARLIN: Multi-Agent Reinforcement Learning Guided by Language-Based Inter-Robot Negotiation
   Toby Godfrey, William Hunt, Mohammad D. Soorati (18 Oct 2024)
10. ClickAgent: Enhancing UI Location Capabilities of Autonomous Agents
    Jakub Hoscilowicz, Bartosz Maj, Bartosz Kozakiewicz, Oleksii Tymoshchuk, Artur Janicki (09 Oct 2024)
11. MARCA: Mamba Accelerator with ReConfigurable Architecture
    Jinhao Li, Shan Huang, Jiaming Xu, Jun Liu, Li Ding, Ningyi Xu, Guohao Dai (16 Sep 2024)
12. Hardware Acceleration of LLMs: A comprehensive survey and comparison
    Nikoletta Koilia, C. Kachris (05 Sep 2024)
13. Designing Efficient LLM Accelerators for Edge Devices
    Jude Haris, Rappy Saha, Wenhao Hu, José Cano (01 Aug 2024)
14. Gemma 2: Improving Open Language Models at a Practical Size
    Gemma Team: Morgane Riviere, Shreya Pathak, Pier Giuseppe Sessa, Cassidy Hardin, ..., Noah Fiedel, Armand Joulin, Kathleen Kenealy, Robert Dadashi, Alek Andreev (31 Jul 2024)
15. Qwen2 Technical Report
    An Yang, Baosong Yang, Binyuan Hui, Jian Xu, Bowen Yu, ..., Yuqiong Liu, Zeyu Cui, Zhenru Zhang, Zhifang Guo, Zhi-Wei Fan (15 Jul 2024)
16. Inference Performance Optimization for Large Language Models on CPUs
    Pujiang He, Shan Zhou, Wenhuan Huang, Changqing Li, Duyi Wang, Bin Guo, Chen Meng, Sheng Gui, Weifei Yu, Yi Xie (10 Jul 2024)
17. Mobile Edge Intelligence for Large Language Models: A Contemporary Survey
    Guanqiao Qu, Qiyuan Chen, Wei Wei, Zheng Lin, Xianhao Chen, Kaibin Huang (09 Jul 2024)
18. T-MAC: CPU Renaissance via Table Lookup for Low-Bit LLM Deployment on Edge
    Jianyu Wei, Shijie Cao, Ting Cao, Lingxiao Ma, Lei Wang, Yanyong Zhang, Mao Yang (25 Jun 2024)
19. EAGLE-2: Faster Inference of Language Models with Dynamic Draft Trees
    Yuhui Li, Fangyun Wei, Chao Zhang, Hongyang R. Zhang (24 Jun 2024)
20. ChatGLM: A Family of Large Language Models from GLM-130B to GLM-4 All Tools
    Team GLM: Aohan Zeng, Bin Xu, Bowen Wang, ..., Zhaoyu Wang, Zhen Yang, Zhengxiao Du, Zhenyu Hou, Zihan Wang (18 Jun 2024)
21. Tender: Accelerating Large Language Models via Tensor Decomposition and Runtime Requantization
    Jungi Lee, Wonbeom Lee, Jaewoong Sim (16 Jun 2024)
22. Memory Is All You Need: An Overview of Compute-in-Memory Architectures for Accelerating Large Language Model Inference
    Christopher Wolters, Xiaoxuan Yang, Ulf Schlichtmann, Toyotaro Suzumura (12 Jun 2024)
23. PowerInfer-2: Fast Large Language Model Inference on a Smartphone
    Zhenliang Xue, Yixin Song, Zeyu Mi, Le Chen, Yubin Xia, Haibo Chen (10 Jun 2024)
24. Turbo Sparse: Achieving LLM SOTA Performance with Minimal Activated Parameters
    Yixin Song, Haotong Xie, Zhengyan Zhang, Bo Wen, Li Ma, Zeyu Mi, Haibo Chen (10 Jun 2024)
25. Transformers are SSMs: Generalized Models and Efficient Algorithms Through Structured State Space Duality
    Tri Dao, Albert Gu (31 May 2024)
26. MoNDE: Mixture of Near-Data Experts for Large-Scale Sparse Models
    Taehyun Kim, Kwanseok Choi, Youngmock Cho, Jaehoon Cho, Hyukzae Lee, Jaewoong Sim (29 May 2024)
27. Distributed Inference Performance Optimization for LLMs on CPUs
    Pujiang He, Shan Zhou, Changqing Li, Wenhuan Huang, Weifei Yu, Duyi Wang, Chen Meng, Sheng Gui (16 May 2024)
28. Enabling High-Sparsity Foundational Llama Models with Efficient Pretraining and Deployment
    Abhinav Agarwalla, Abhay Gupta, Alexandre Marques, Shubhra Pandit, Michael Goin, ..., Tuan Nguyen, Mahmoud Salem, Dan Alistarh, Sean Lie, Mark Kurtz (06 May 2024)
29. HLSTransform: Energy-Efficient Llama 2 Inference on FPGAs Via High Level Synthesis
    Andy He, Darren Key, Mason Bulling, Andrew Chang, Skyler Shapiro, Everett Lee (29 Apr 2024)
30. Kangaroo: Lossless Self-Speculative Decoding via Double Early Exiting
    Fangcheng Liu, Yehui Tang, Zhenhua Liu, Yunsheng Ni, Kai Han, Yunhe Wang (29 Apr 2024)
31. LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding
    Mostafa Elhoushi, Akshat Shrivastava, Diana Liskovich, Basil Hosmer, Bram Wasti, ..., Saurabh Agarwal, Ahmed Roman, Ahmed Aly, Beidi Chen, Carole-Jean Wu (25 Apr 2024)
32. A Survey on Efficient Inference for Large Language Models
    Zixuan Zhou, Xuefei Ning, Ke Hong, Tianyu Fu, Jiaming Xu, ..., Shengen Yan, Guohao Dai, Xiao-Ping Zhang, Yuhan Dong, Yu Wang (22 Apr 2024)
33. Megalodon: Efficient LLM Pretraining and Inference with Unlimited Context Length
    Xuezhe Ma, Xiaomeng Yang, Wenhan Xiong, Beidi Chen, Lili Yu, Hao Zhang, Jonathan May, Luke Zettlemoyer, Omer Levy, Chunting Zhou (12 Apr 2024)
34. Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention
    Tsendsuren Munkhdalai, Manaal Faruqui, Siddharth Gopal (10 Apr 2024)
35. Large Language Model (LLM) AI text generation detection based on transformer deep learning algorithm
    Yuhong Mo, Hao Qin, Yushan Dong, Ziyi Zhu, Zhenglin Li (06 Apr 2024)
36. Mixture-of-Depths: Dynamically allocating compute in transformer-based language models
    David Raposo, Sam Ritter, Blake A. Richards, Timothy Lillicrap, Peter C. Humphreys, Adam Santoro (02 Apr 2024)
37. Balanced Data Placement for GEMV Acceleration with Processing-In-Memory
    M. Ibrahim, Mahzabeen Islam, Shaizeen Aga (29 Mar 2024)
38. DiJiang: Efficient Large Language Models through Compact Kernelization
    Hanting Chen, Zhicheng Liu, Xutao Wang, Yuchuan Tian, Yunhe Wang (29 Mar 2024)
39. Jamba: A Hybrid Transformer-Mamba Language Model
    Opher Lieber, Barak Lenz, Hofit Bata, Gal Cohen, Jhonathan Osin, ..., Nir Ratner, N. Rozen, Erez Shwartz, Mor Zusman, Y. Shoham (28 Mar 2024)
40. Not All Layers of LLMs Are Necessary During Inference
    Siqi Fan, Xin Jiang, Xiang Li, Xuying Meng, Peng Han, Shuo Shang, Aixin Sun, Yequan Wang, Zhongyuan Wang (04 Mar 2024)
41. NoMAD-Attention: Efficient LLM Inference on CPUs Through Multiply-add-free Attention
    Tianyi Zhang, Jonah Yi, Bowen Yao, Zhaozhuo Xu, Anshumali Shrivastava (02 Mar 2024)
42. Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models
    Soham De, Samuel L. Smith, Anushan Fernando, Aleksandar Botev, George-Christian Muraru, ..., David Budden, Yee Whye Teh, Razvan Pascanu, Nando de Freitas, Çağlar Gülçehre (29 Feb 2024)
43. APTQ: Attention-aware Post-Training Mixed-Precision Quantization for Large Language Models
    Ziyi Guan, Hantao Huang, Yupeng Su, Hong Huang, Ngai Wong, Hao Yu (21 Feb 2024)
44. ProSparse: Introducing and Enhancing Intrinsic Activation Sparsity within Large Language Models
    Chenyang Song, Xu Han, Zhengyan Zhang, Shengding Hu, Xiyu Shi, ..., Chen Chen, Zhiyuan Liu, Guanglin Li, Tao Yang, Maosong Sun (21 Feb 2024)
45. Sequoia: Scalable, Robust, and Hardware-aware Speculative Decoding
    Zhuoming Chen, Avner May, Ruslan Svirschevski, Yuhsun Huang, Max Ryabinin, Zhihao Jia, Beidi Chen (19 Feb 2024)
46. Break the Sequential Dependency of LLM Inference Using Lookahead Decoding
    Yichao Fu, Peter Bailis, Ion Stoica, Hao Zhang (03 Feb 2024)
47. ConSmax: Hardware-Friendly Alternative Softmax with Learnable Parameters
    Shiwei Liu, Guanchen Tao, Yifei Zou, Derek Chow, Zichen Fan, Kauna Lei, Bangfei Pan, Dennis Sylvester, Gregory Kielian, Mehdi Saligane (31 Jan 2024)
48. A Comprehensive Survey of Compression Algorithms for Language Models
    Seungcheol Park, Jaehyeon Choi, Sojin Lee, U. Kang (27 Jan 2024)
49. EAGLE: Speculative Sampling Requires Rethinking Feature Uncertainty
    Yuhui Li, Fangyun Wei, Chao Zhang, Hongyang R. Zhang (26 Jan 2024)
50. FP6-LLM: Efficiently Serving Large Language Models Through FP6-Centric Algorithm-System Co-Design
    Haojun Xia, Zhen Zheng, Xiaoxia Wu, Shiyang Chen, Zhewei Yao, ..., Donglin Zhuang, Zhongzhu Zhou, Olatunji Ruwase, Yuxiong He, Shuaiwen Leon Song (25 Jan 2024)