ReAttention: Training-Free Infinite Context with Finite Attention Scope

21 July 2024 · arXiv:2407.15176

Xiaoran Liu, Ruixiao Li, Yuerong Song, Zhigeng Liu, Kai Lv, Hang Yan, Linlin Li, Qun Liu, Xipeng Qiu

LLMAG

Papers citing "ReAttention: Training-Free Infinite Context with Finite Attention Scope"

3 papers shown:
1. BGE Landmark Embedding: A Chunking-Free Embedding Method For Retrieval Augmented Long-Context Large Language Models
   Kun Luo, Zheng Liu, Shitao Xiao, Kang Liu
   18 Feb 2024

2. LongHeads: Multi-Head Attention is Secretly a Long Context Processor
   Yi Lu, Xin Zhou, Wei He, Jun Zhao, Tao Ji, Tao Gui, Qi Zhang, Xuanjing Huang
   LLMAG
   16 Feb 2024

3. Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation
   Ofir Press, Noah A. Smith, M. Lewis
   27 Aug 2021