ResearchTrend.AI
ReadTwice: Reading Very Large Documents with Memories

arXiv:2105.04241 · 10 May 2021
Yury Zemlyanskiy, Joshua Ainslie, Michiel de Jong, Philip Pham, Ilya Eckstein, Fei Sha
Tags: AIMat, RALM

Papers citing "ReadTwice: Reading Very Large Documents with Memories"

11 / 11 papers shown

  1. Uncertainty Guided Global Memory Improves Multi-Hop Question Answering
     Alsu Sagirova, Andrey Kravchenko — 29 Nov 2023 — RALM

  2. MEMORY-VQ: Compression for Tractable Internet-Scale Memory
     Yury Zemlyanskiy, Michiel de Jong, Luke Vilnis, Santiago Ontañón, William W. Cohen, Sumit Sanghai, Joshua Ainslie — 28 Aug 2023 — RALM, MQ

  3. CoLT5: Faster Long-Range Transformers with Conditional Computation
     Joshua Ainslie, Tao Lei, Michiel de Jong, Santiago Ontañón, Siddhartha Brahma, ..., Mandy Guo, James Lee-Thorp, Yi Tay, Yun-hsuan Sung, Sumit Sanghai — 17 Mar 2023 — LLMAG

  4. FiDO: Fusion-in-Decoder optimized for stronger performance and faster inference
     Michiel de Jong, Yury Zemlyanskiy, Joshua Ainslie, Nicholas FitzGerald, Sumit Sanghai, Fei Sha, William W. Cohen — 15 Dec 2022 — VLM

  5. Training Language Models with Memory Augmentation
     Zexuan Zhong, Tao Lei, Danqi Chen — 25 May 2022 — RALM

  6. Memorizing Transformers
     Yuhuai Wu, M. Rabe, DeLesley S. Hutchins, Christian Szegedy — 16 Mar 2022 — RALM

  7. LongT5: Efficient Text-To-Text Transformer for Long Sequences
     Mandy Guo, Joshua Ainslie, David C. Uthus, Santiago Ontanon, Jianmo Ni, Yun-hsuan Sung, Yinfei Yang — 15 Dec 2021 — VLM

  8. Mention Memory: incorporating textual knowledge into Transformers through entity mention attention
     Michiel de Jong, Yury Zemlyanskiy, Nicholas FitzGerald, Fei Sha, William W. Cohen — 12 Oct 2021 — RALM

  9. Recursively Summarizing Books with Human Feedback
     Jeff Wu, Long Ouyang, Daniel M. Ziegler, Nissan Stiennon, Ryan J. Lowe, Jan Leike, Paul Christiano — 22 Sep 2021 — ALM

  10. Big Bird: Transformers for Longer Sequences
      Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, ..., Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed — 28 Jul 2020 — VLM

  11. Knowledge Enhanced Contextual Word Representations
      Matthew E. Peters, Mark Neumann, Robert L. Logan IV, Roy Schwartz, Vidur Joshi, Sameer Singh, Noah A. Smith — 09 Sep 2019