
arXiv:2410.15332
EPIC: Efficient Position-Independent Context Caching for Serving Large Language Models

20 October 2024
Junhao Hu, Wenrui Huang, Haoran Wang, Weidong Wang, Tiancheng Hu, Qin Zhang, Hao Feng, Xusheng Chen, Yizhou Shan, Tao Xie
Tags: RALM, LLMAG

Papers citing "EPIC: Efficient Position-Independent Context Caching for Serving Large Language Models"

3 / 3 papers shown
Semantic Caching of Contextual Summaries for Efficient Question-Answering with Language Models
Camille Couturier, Spyros Mastorakis, Haiying Shen, Saravan Rajmohan, Victor Rühle
Tags: KELM — 16 May 2025
From Human Memory to AI Memory: A Survey on Memory Mechanisms in the Era of LLMs
Yaxiong Wu, Sheng Liang, Chen Zhang, Y. Wang, Yuhang Zhang, Huifeng Guo, Ruiming Tang, Y. Liu
Tags: KELM — 22 Apr 2025
Auditing Prompt Caching in Language Model APIs
Chenchen Gu, Xiang Lisa Li, Rohith Kuditipudi, Percy Liang, Tatsunori Hashimoto
11 Feb 2025