ReAttention: Training-Free Infinite Context with Finite Attention Scope
arXiv:2407.15176 · 21 July 2024
Xiaoran Liu, Ruixiao Li, Yuerong Song, Zhigeng Liu, Kai Lv, Hang Yan, Linlin Li, Qun Liu, Xipeng Qiu

Papers citing "ReAttention: Training-Free Infinite Context with Finite Attention Scope" (3 of 3 papers shown):

BGE Landmark Embedding: A Chunking-Free Embedding Method For Retrieval Augmented Long-Context Large Language Models
Kun Luo, Zheng Liu, Shitao Xiao, Kang Liu
18 Feb 2024

LongHeads: Multi-Head Attention is Secretly a Long Context Processor
Yi Lu, Xin Zhou, Wei He, Jun Zhao, Tao Ji, Tao Gui, Qi Zhang, Xuanjing Huang
16 Feb 2024

Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation
Ofir Press, Noah A. Smith, M. Lewis
27 Aug 2021