arXiv 2412.16545
Attention Entropy is a Key Factor: An Analysis of Parallel Context Encoding with Full-attention-based Pre-trained Language Models
21 December 2024
Zhisong Zhang, Yan Wang, Xinting Huang, Tianqing Fang, Han Zhang, Chenlong Deng, Shuaiyi Li, Dong Yu
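The quantity named in the title, attention entropy, is the Shannon entropy of each query position's attention distribution over the keys. The sketch below is a generic illustration of how it can be computed from raw attention logits; the function name, shapes, and any aggregation across heads or layers are assumptions, not the paper's own code.

```python
# Generic illustration of attention entropy (not the paper's implementation).
import torch

def attention_entropy(scores: torch.Tensor, dim: int = -1) -> torch.Tensor:
    """scores: raw attention logits of shape (..., q_len, k_len).
    Returns the per-query Shannon entropy, shape (..., q_len)."""
    probs = torch.softmax(scores, dim=dim)              # row-stochastic attention weights
    return -(probs * torch.log(probs + 1e-12)).sum(dim=dim)

# Toy example: entropy grows when a query spreads its attention over more keys.
q, k = torch.randn(4, 8), torch.randn(16, 8)
logits = q @ k.T / (8 ** 0.5)                           # scaled dot-product scores
print(attention_entropy(logits))
```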
Papers citing "Attention Entropy is a Key Factor: An Analysis of Parallel Context Encoding with Full-attention-based Pre-trained Language Models" (6 papers)
InComeS: Integrating Compression and Selection Mechanisms into LLMs for Efficient Model Editing
Shuaiyi Li, Zhisong Zhang, Yang Deng, Chenlong Deng, Tianqing Fang, Hongming Zhang, Haitao Mi, Dong Yu, Wai Lam
28 May 2025 (KELM)

Understanding Differential Transformer Unchains Pretrained Self-Attentions
Chaerin Kong, Jiho Jang, Nojun Kwak
22 May 2025

SkyLadder: Better and Faster Pretraining via Context Window Scheduling
Tongyao Zhu, Qian Liu, Haonan Wang, Shiqi Chen, Xiangming Gu, Tianyu Pang, Min-Yen Kan
19 Mar 2025

KVLink: Accelerating Large Language Models via Efficient KV Cache Reuse
Jingbo Yang, Bairu Hou, Wei Wei, Yujia Bao, Shiyu Chang
21 Feb 2025 (VLM)

Spatio-Temporal Control for Masked Motion Synthesis
Ekkasit Pinyoanuntapong, Muhammad Usama Saleem, Korrawe Karunratanakul, Pu Wang, Hongfei Xue, Chong Chen, Chuan Guo, Junli Cao, J. Ren, Sergey Tulyakov
14 Oct 2024 (VGen)

Efficient Intent Detection with Dual Sentence Encoders
I. Casanueva, Tadas Temčinas, D. Gerz, Matthew Henderson, Ivan Vulić
10 Mar 2020 (VLM)