Compressive Transformers for Long-Range Sequence Modelling

13 November 2019
Jack W. Rae
Anna Potapenko
Siddhant M. Jayakumar
Timothy Lillicrap
RALM VLM KELM

Papers citing "Compressive Transformers for Long-Range Sequence Modelling"

50 / 232 papers shown
Compression via Pre-trained Transformers: A Study on Byte-Level Multimodal Data
David Heurtel-Depeiges
Anian Ruoss
J. Veness
Tim Genewein
218
2
0
07 Oct 2024
How to Train Long-Context Language Models (Effectively)
Tianyu Gao
Alexander Wettig
Howard Yen
Danqi Chen
RALM
202
48
0
03 Oct 2024
Selective Attention Improves Transformer
Yaniv Leviathan
Matan Kalman
Yossi Matias
119
12
0
03 Oct 2024
Extending Context Window of Large Language Models from a Distributional Perspective
Yingsheng Wu
Yuxuan Gu
Xiaocheng Feng
Weihong Zhong
Dongliang Xu
Qing Yang
Hongtao Liu
Bing Qin
42
2
0
02 Oct 2024
House of Cards: Massive Weights in LLMs
Jaehoon Oh
Seungjun Shin
Dokwan Oh
125
1
0
02 Oct 2024
Self-evolving Agents with reflective and memory-augmented abilities
Xuechen Liang
Yangfan He
Yinghui Xia
Xinyuan Song
Jianhui Wang
...
Keqin Li
Jiaqi Chen
Jinsong Yang
Siyuan Chen
Tianyu Shi
LLMAG KELM CLL
151
4
0
01 Sep 2024
MagicDec: Breaking the Latency-Throughput Tradeoff for Long Context Generation with Speculative Decoding
Jian Chen
Vashisth Tiwari
Ranajoy Sadhukhan
Zhuoming Chen
Jinyuan Shi
Ian En-Hsu Yen
Avner May
Tianqi Chen
Beidi Chen
LRM
156
32
0
20 Aug 2024
Human-like Episodic Memory for Infinite Context LLMs
Zafeirios Fountas
Martin A Benfeghoul
Adnan Oomerjee
Fenia Christopoulou
Gerasimos Lampouras
Haitham Bou-Ammar
Jun Wang
88
21
0
12 Jul 2024
Is It Really Long Context if All You Need Is Retrieval? Towards Genuinely Difficult Long Context NLP
Omer Goldman
Alon Jacovi
Aviv Slobodkin
Aviya Maimon
Ido Dagan
Reut Tsarfaty
122
11
0
29 Jun 2024
UIO-LLMs: Unbiased Incremental Optimization for Long-Context LLMs
Wenhao Li
Mingbao Lin
Mingliang Xu
Shuicheng Yan
Rongrong Ji
71
0
0
26 Jun 2024
Blind Baselines Beat Membership Inference Attacks for Foundation Models
Debeshee Das
Jie Zhang
Florian Tramèr
MIALM
180
39
1
23 Jun 2024
PIN: A Knowledge-Intensive Dataset for Paired and Interleaved Multimodal Documents
Junjie Wang
Yin Zhang
Yatai Ji
Yuxiang Zhang
Chunyang Jiang
...
Bei Chen
Qunshu Lin
Minghao Liu
Ge Zhang
Wenhu Chen
97
3
0
20 Jun 2024
VoCo-LLaMA: Towards Vision Compression with Large Language Models
Xubing Ye
Yukang Gan
Xiaoke Huang
Yixiao Ge
Yansong Tang
MLLM VLM
130
28
0
18 Jun 2024
Hierarchical Compression of Text-Rich Graphs via Large Language Models
Shichang Zhang
Da Zheng
Jiani Zhang
Qi Zhu
Xiang Song
Soji Adeshina
Christos Faloutsos
George Karypis
Yizhou Sun
VLM
92
1
0
13 Jun 2024
Scalable Bayesian Learning with posteriors
Samuel Duffield
Kaelan Donatella
Johnathan Chiu
Phoebe Klett
Daniel Simpson
BDL UQCV
182
2
0
31 May 2024
Base of RoPE Bounds Context Length
Xin Men
Mingyu Xu
Bingning Wang
Qingyu Zhang
Hongyu Lin
Xianpei Han
Weipeng Chen
101
26
0
23 May 2024
Whole Genome Transformer for Gene Interaction Effects in Microbiome Habitat Specificity
Zhufeng Li
S. S. Cranganore
Nicholas D. Youngblut
Niki Kilbertus
117
2
0
09 May 2024
Recall Them All: Retrieval-Augmented Language Models for Long Object List Extraction from Long Documents
Sneha Singhania
Simon Razniewski
Gerhard Weikum
RALM
129
1
0
04 May 2024
Temporal Scaling Law for Large Language Models
Yizhe Xiong
Xiansheng Chen
Xin Ye
Hui Chen
Zijia Lin
...
Zhenpeng Su
Wei Huang
Jianwei Niu
Jiawei Han
Guiguang Ding
120
10
0
27 Apr 2024
Mamba-360: Survey of State Space Models as Transformer Alternative for Long Sequence Modelling: Methods, Applications, and Challenges
Badri N. Patro
Vijay Srinivas Agneeswaran
Mamba
116
45
0
24 Apr 2024
CORM: Cache Optimization with Recent Message for Large Language Model Inference
Jincheng Dai
Zhuowei Huang
Haiyun Jiang
Chen Chen
Deng Cai
Wei Bi
Shuming Shi
109
3
0
24 Apr 2024
Revisiting Dynamic Evaluation: Online Adaptation for Large Language Models
Amal Rannen-Triki
J. Bornschein
Razvan Pascanu
Marcus Hutter
Andras Gyorgy
Alexandre Galashov
Yee Whye Teh
Michalis K. Titsias
KELM
49
4
0
03 Mar 2024
CAMELoT: Towards Large Language Models with Training-Free Consolidated Associative Memory
Zexue He
Leonid Karlinsky
Donghyun Kim
Julian McAuley
Dmitry Krotov
Rogerio Feris
KELM RALM
86
11
0
21 Feb 2024
Streaming Sequence Transduction through Dynamic Compression
Weiting Tan
Yunmo Chen
Tongfei Chen
Guanghui Qin
Haoran Xu
Heidi C. Zhang
Benjamin Van Durme
Philipp Koehn
169
2
0
02 Feb 2024
Investigating Recurrent Transformers with Dynamic Halt
Jishnu Ray Chowdhury
Cornelia Caragea
186
1
0
01 Feb 2024
LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning
Hongye Jin
Xiaotian Han
Jingfeng Yang
Zhimeng Jiang
Zirui Liu
Chia-Yuan Chang
Huiyuan Chen
Helen Zhou
124
118
0
02 Jan 2024
LongQLoRA: Efficient and Effective Method to Extend Context Length of Large Language Models
Jianxin Yang
43
6
0
08 Nov 2023
GNAT: A General Narrative Alignment Tool
T. Pial
Steven Skiena
52
4
0
07 Nov 2023
STONYBOOK: A System and Resource for Large-Scale Analysis of Novels
Charuta Pethe
Allen Kim
Rajesh Prabhakar
T. Pial
Steven Skiena
18
1
0
06 Nov 2023
CLEX: Continuous Length Extrapolation for Large Language Models
Guanzheng Chen
Xin Li
Zaiqiao Meng
Shangsong Liang
Li Bing
102
32
0
25 Oct 2023
Did the Neurons Read your Book? Document-level Membership Inference for Large Language Models
Matthieu Meeus
Shubham Jain
Marek Rei
Yves-Alexandre de Montjoye
MIALM
83
33
0
23 Oct 2023
Walking Down the Memory Maze: Beyond Context Limit through Interactive Reading
Howard Chen
Ramakanth Pasunuru
Jason Weston
Asli Celikyilmaz
RALM
148
86
0
08 Oct 2023
LongLoRA: Efficient Fine-tuning of Long-Context Large Language Models
Yukang Chen
Shengju Qian
Haotian Tang
Xin Lai
Zhijian Liu
Song Han
Jiaya Jia
167
170
0
21 Sep 2023
A Data Source for Reasoning Embodied Agents
Jack Lanchantin
Sainbayar Sukhbaatar
Gabriel Synnaeve
Yuxuan Sun
Kavya Srinet
Arthur Szlam
LM&Ro LRM
57
5
0
14 Sep 2023
YaRN: Efficient Context Window Extension of Large Language Models
Bowen Peng
Jeffrey Quesnelle
Honglu Fan
Enrico Shippole
OSLM
115
264
0
31 Aug 2023
H$_2$O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models
Zhenyu Zhang
Ying Sheng
Dinesh Manocha
Tianlong Chen
Lianmin Zheng
...
Yuandong Tian
Christopher Ré
Clark W. Barrett
Zhangyang Wang
Beidi Chen
VLM
189
314
0
24 Jun 2023
NF4 Isn't Information Theoretically Optimal (and that's Good)
Davis Yoshida
MQ
92
10
0
12 Jun 2023
S$^{3}$: Increasing GPU Utilization during Generative Inference for Higher Throughput
Yunho Jin
Chun-Feng Wu
David Brooks
Gu-Yeon Wei
110
71
0
09 Jun 2023
A Quantitative Review on Language Model Efficiency Research
Meng Jiang
Hy Dang
Lingbo Tong
76
0
0
28 May 2023
FIT: Far-reaching Interleaved Transformers
Ting-Li Chen
Lala Li
108
13
0
22 May 2023
A Memory Model for Question Answering from Streaming Data Supported by Rehearsal and Anticipation of Coreference Information
Vladimir Araujo
Alvaro Soto
Marie-Francine Moens
KELM
71
2
0
12 May 2023
MEGABYTE: Predicting Million-byte Sequences with Multiscale Transformers
L. Yu
Daniel Simig
Colin Flaherty
Armen Aghajanyan
Luke Zettlemoyer
M. Lewis
116
93
0
12 May 2023
Learning to Compress Prompts with Gist Tokens
Jesse Mu
Xiang Lisa Li
Noah D. Goodman
VLM
146
227
0
17 Apr 2023
Obstacle-Transformer: A Trajectory Prediction Network Based on Surrounding Trajectories
Wendong Zhang
Qingjie Chai
Quanqi Zhang
Chengwei Wu
51
6
0
16 Apr 2023
Koala: An Index for Quantifying Overlaps with Pre-training Corpora
Thuy-Trang Vu
Xuanli He
Gholamreza Haffari
Ehsan Shareghi
CLL
73
15
0
26 Mar 2023
Sparsifiner: Learning Sparse Instance-Dependent Attention for Efficient Vision Transformers
Cong Wei
Brendan Duke
R. Jiang
P. Aarabi
Graham W. Taylor
Florian Shkurti
ViT
107
17
0
24 Mar 2023
A Survey on Long Text Modeling with Transformers
Zican Dong
Tianyi Tang
Lunyi Li
Wayne Xin Zhao
VLM
140
57
0
28 Feb 2023
Full Stack Optimization of Transformer Inference: a Survey
Sehoon Kim
Coleman Hooper
Thanakul Wattanawong
Minwoo Kang
Ruohan Yan
...
Qijing Huang
Kurt Keutzer
Michael W. Mahoney
Y. Shao
A. Gholami
MQ
163
106
0
27 Feb 2023
Hyena Hierarchy: Towards Larger Convolutional Language Models
Michael Poli
Stefano Massaroli
Eric Q. Nguyen
Daniel Y. Fu
Tri Dao
S. Baccus
Yoshua Bengio
Stefano Ermon
Christopher Ré
VLM
177
314
0
21 Feb 2023
Neural Attention Memory
Hyoungwook Nam
S. Seo
HAI
54
1
0
18 Feb 2023