Found in the Middle: How Language Models Use Long Contexts Better via Plug-and-Play Positional Encoding (arXiv:2403.04797)

5 March 2024
Zhenyu (Allen) Zhang, Runjin Chen, Shiwei Liu, Zhewei Yao, Olatunji Ruwase, Beidi Chen, Xiaoxia Wu, Zhangyang Wang

Papers citing "Found in the Middle: How Language Models Use Long Contexts Better via Plug-and-Play Positional Encoding"

20 of 20 citing papers shown.

The Use of Gaze-Derived Confidence of Inferred Operator Intent in Adjusting Safety-Conscious Haptic Assistance
Jeremy D. Webb, Michael Bowman, Songpo Li, Xiaoli Zhang
04 Apr 2025

Lost-in-the-Middle in Long-Text Generation: Synthetic Dataset, Evaluation Framework, and Mitigation
Junhao Zhang, Richong Zhang, Fanshuang Kong, Ziyang Miao, Yanhan Ye, Yaowei Zheng
Communities: SyDa
10 Mar 2025

Layer-Specific Scaling of Positional Encodings for Superior Long-Context Modeling
Zhenghua Wang, Yiran Ding, Changze Lv, Zhibo Xu, Tianlong Li, Tianyuan Shi, Xiaoqing Zheng, Xuanjing Huang
06 Mar 2025

FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference
Xunhao Lai, Jianqiao Lu, Yao Luo, Yiyuan Ma, Xun Zhou
28 Feb 2025

END: Early Noise Dropping for Efficient and Effective Context Denoising
Hongye Jin, Pei Chen, Jingfeng Yang, Zhaobo Wang, Meng Jiang, ..., Xuzhi Zhang, Zheng Li, Tianyi Liu, Huasheng Li, Bing Yin
26 Feb 2025

Breaking the Stage Barrier: A Novel Single-Stage Approach to Long Context Extension for Large Language Models
Haoran Lian, Junmin Chen, Wei Huang, Yizhe Xiong, Wenping Hu, ..., Hui Chen, Jianwei Niu, Zijia Lin, Fuzheng Zhang, Di Zhang
10 Dec 2024

DAPE V2: Process Attention Score as Feature Map for Length Extrapolation
Chuanyang Zheng, Yihang Gao, Han Shi, Jing Xiong, Jiankai Sun, ..., Xiaozhe Ren, Michael Ng, Xin Jiang, Zhenguo Li, Yu Li
07 Oct 2024

PEAR: Position-Embedding-Agnostic Attention Re-weighting Enhances Retrieval-Augmented Generation with Zero Inference Overhead
Tao Tan, Yining Qian, Ang Lv, Hongzhan Lin, Songhao Wu, Yongbo Wang, Feng Wang, Jingtong Wu, Xin Lu, Rui Yan
29 Sep 2024

Towards LifeSpan Cognitive Systems
Yu Wang, Chi Han, Tongtong Wu, Xiaoxin He, Wangchunshu Zhou, ..., Zexue He, Wei Wang, Gholamreza Haffari, Heng Ji, Julian McAuley
Communities: KELM, CLL
20 Sep 2024

From Artificial Needles to Real Haystacks: Improving Retrieval Capabilities in LLMs by Finetuning on Synthetic Data
Zheyang Xiong, Vasilis Papageorgiou, Kangwook Lee, Dimitris Papailiopoulos
Communities: SyDa, RALM
27 Jun 2024

Attention Instruction: Amplifying Attention in the Middle via Prompting
Meiru Zhang, Zaiqiao Meng, Nigel Collier
24 Jun 2024

Insights into LLM Long-Context Failures: When Transformers Know but Don't Tell
Taiming Lu, Muhan Gao, Kuai Yu, Adam Byerly, Daniel Khashabi
20 Jun 2024

CItruS: Chunked Instruction-aware State Eviction for Long Sequence Modeling
Yu Bai, Xiyuan Zou, Heyan Huang, Sanxing Chen, Marc-Antoine Rondeau, Yang Gao, Jackie Chi Kit Cheung
17 Jun 2024

Mitigate Position Bias in Large Language Models via Scaling a Single Dimension
Yijiong Yu, Huiqiang Jiang, Xufang Luo, Qianhui Wu, Chin-Yew Lin, Dongsheng Li, Yuqing Yang, Yongfeng Huang, L. Qiu
04 Jun 2024

The CAP Principle for LLM Serving: A Survey of Long-Context Large Language Model Serving
Pai Zeng, Zhenyu Ning, Jieru Zhao, Weihao Cui, Mengwei Xu, Liwei Guo, Xusheng Chen, Yizhou Shan
Communities: LLMAG
18 May 2024

LongLLMLingua: Accelerating and Enhancing LLMs in Long Context Scenarios via Prompt Compression
Huiqiang Jiang, Qianhui Wu, Xufang Luo, Dongsheng Li, Chin-Yew Lin, Yuqing Yang, Lili Qiu
Communities: RALM
10 Oct 2023

Walking Down the Memory Maze: Beyond Context Limit through Interactive Reading
Howard Chen, Ramakanth Pasunuru, Jason Weston, Asli Celikyilmaz
Communities: RALM
08 Oct 2023

DeepSpeed4Science Initiative: Enabling Large-Scale Scientific Discovery through Sophisticated AI System Technologies
Shuaiwen Leon Song, Bonnie Kruft, Minjia Zhang, Conglong Li, Shiyang Chen, ..., Arash Vahdat, Chaowei Xiao, Thomas Gibbs, Anima Anandkumar, R. Stevens
06 Oct 2023

LM-Infinite: Zero-Shot Extreme Length Generalization for Large Language Models
Chi Han, Qifan Wang, Hao Peng, Wenhan Xiong, Yu Chen, Heng Ji, Sinong Wang
30 Aug 2023

Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation
Ofir Press, Noah A. Smith, M. Lewis
27 Aug 2021