On the Power of Convolution Augmented Transformer
arXiv:2407.05591
8 July 2024
Mingchen Li, Xuechen Zhang, Yixiao Huang, Samet Oymak
Papers citing "On the Power of Convolution Augmented Transformer" (8 of 8 shown)
Mechanistic Design and Scaling of Hybrid Architectures (MoE)
Michael Poli, Armin W. Thomas, Eric N. D. Nguyen, Pragaash Ponnusamy, Bjorn Deiseroth, ..., Brian Hie, Stefano Ermon, Christopher Ré, Ce Zhang, Stefano Massaroli
26 Mar 2024

Simple linear attention language models balance the recall-throughput tradeoff
Simran Arora, Sabri Eyuboglu, Michael Zhang, Aman Timalsina, Silas Alberti, Dylan Zinsley, James Zou, Atri Rudra, Christopher Ré
28 Feb 2024

Repeat After Me: Transformers are Better than State Space Models at Copying
Samy Jelassi, David Brandfonbrener, Sham Kakade, Eran Malach
01 Feb 2024

Zoology: Measuring and Improving Recall in Efficient Language Models
Simran Arora, Sabri Eyuboglu, Aman Timalsina, Isys Johnson, Michael Poli, James Zou, Atri Rudra, Christopher Ré
08 Dec 2023

Resurrecting Recurrent Neural Networks for Long Sequences
Antonio Orvieto, Samuel L. Smith, Albert Gu, Anushan Fernando, Çağlar Gülçehre, Razvan Pascanu, Soham De
11 Mar 2023

Liquid Structural State-Space Models (AI4TS)
Ramin Hasani, Mathias Lechner, Tsun-Hsuan Wang, Makram Chahine, Alexander Amini, Daniela Rus
26 Sep 2022

In-context Learning and Induction Heads
Catherine Olsson, Nelson Elhage, Neel Nanda, Nicholas Joseph, Nova Dassarma, ..., Tom B. Brown, Jack Clark, Jared Kaplan, Sam McCandlish, C. Olah
24 Sep 2022

Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation
Ofir Press, Noah A. Smith, M. Lewis
27 Aug 2021