IceFormer: Accelerated Inference with Long-Sequence Transformers on CPUs

5 May 2024
Yuzhen Mao, Martin Ester, Ke Li

Papers citing "IceFormer: Accelerated Inference with Long-Sequence Transformers on CPUs"

ZETA: Leveraging Z-order Curves for Efficient Top-k Attention
Qiuhao Zeng, Jerry Huang, Peng Lu, Gezheng Xu, Boxing Chen, Charles Ling, Boyu Wang
24 Jan 2025
RetrievalAttention: Accelerating Long-Context LLM Inference via Vector Retrieval
Di Liu, Meng Chen, Baotong Lu, Huiqiang Jiang, Zhenhua Han, ..., Kaipeng Zhang, Chong Chen, Fan Yang, Yuqing Yang, Lili Qiu
03 Jan 2025
MInference 1.0: Accelerating Pre-filling for Long-Context LLMs via Dynamic Sparse Attention
Huiqiang Jiang, Yucheng Li, Chengruidong Zhang, Qianhui Wu, Xufang Luo, ..., Amir H. Abdi, Dongsheng Li, Chin-Yew Lin, Yuqing Yang, L. Qiu
02 Jul 2024
SparQ Attention: Bandwidth-Efficient LLM Inference
Luka Ribar, Ivan Chelombiev, Luke Hudlass-Galley, Charlie Blake, Carlo Luschi, Douglas Orr
08 Dec 2023
H-Transformer-1D: Fast One-Dimensional Hierarchical Attention for Sequences
Zhenhai Zhu, Radu Soricut
25 Jul 2021