Cited By
Learning long-range spatial dependencies with horizontal gated-recurrent units
Drew Linsley, Junkyung Kim, Vijay Veerabadran, Thomas Serre
arXiv:1805.08315, 21 May 2018
Papers citing "Learning long-range spatial dependencies with horizontal gated-recurrent units" (34 papers):
PolaFormer: Polarity-aware Linear Attention for Vision Transformers. Weikang Meng, Yadan Luo, Xin Li, D. Jiang, Zheng Zhang. 25 Jan 2025.
Layer-Adaptive State Pruning for Deep State Space Models. Minseon Gwak, Seongrok Moon, Joohwan Ko, PooGyeon Park. 05 Nov 2024.
Sampling Foundational Transformer: A Theoretical Perspective. Viet Anh Nguyen, Minh Lenhat, Khoa Nguyen, Duong Duc Hieu, Dao Huu Hung, Truong-Son Hy. 11 Aug 2024.
Short-Long Convolutions Help Hardware-Efficient Linear Attention to Focus on Long Sequences. Zicheng Liu, Siyuan Li, Li Wang, Zedong Wang, Yunfan Liu, Stan Z. Li. 12 Jun 2024.
Neither hype nor gloom do DNNs justice. Gaurav Malhotra, Christian Tsvetkov, B. D. Evans. 08 Dec 2023.
MIMONets: Multiple-Input-Multiple-Output Neural Networks Exploiting Computation in Superposition. Nicolas Menet, Michael Hersche, G. Karunaratne, Luca Benini, Abu Sebastian, Abbas Rahimi. 05 Dec 2023.
Efficiency 360: Efficient Vision Transformers. Badri N. Patro, Vijay Srinivas Agneeswaran. 16 Feb 2023.
Measuring uncertainty in human visual segmentation. Jonathan Vacher, Claire Launay, Pascal Mamassian, Ruben Coen-Cagli. 18 Jan 2023.
Efficient Long Sequence Modeling via State Space Augmented Transformer. Simiao Zuo, Xiaodong Liu, Jian Jiao, Denis Xavier Charles, Eren Manavoglu, Tuo Zhao, Jianfeng Gao. 15 Dec 2022.
Transformers meet Stochastic Block Models: Attention with Data-Adaptive Sparsity and Cost. Sungjun Cho, Seonwoo Min, Jinwoo Kim, Moontae Lee, Honglak Lee, Seunghoon Hong. 27 Oct 2022.
Fast-FNet: Accelerating Transformer Encoder Models via Efficient Fourier Layers. Nurullah Sevim, Ege Ozan Özyedek, Furkan Şahinuç, Aykut Koç. 26 Sep 2022.
Mega: Moving Average Equipped Gated Attention. Xuezhe Ma, Chunting Zhou, Xiang Kong, Junxian He, Liangke Gui, Graham Neubig, Jonathan May, Luke Zettlemoyer. 21 Sep 2022.
Paramixer: Parameterizing Mixing Links in Sparse Factors Works Better than Dot-Product Self-Attention. Tong Yu, Ruslan Khalitov, Lei Cheng, Zhirong Yang. 22 Apr 2022.
ERNIE-SPARSE: Learning Hierarchical Efficient Transformer Through Regularized Self-Attention. Yang Liu, Jiaxiang Liu, L. Chen, Yuxiang Lu, Shi Feng, Zhida Feng, Yu Sun, Hao Tian, Huancheng Wu, Hai-feng Wang. 23 Mar 2022.
cosFormer: Rethinking Softmax in Attention. Zhen Qin, Weixuan Sun, Huicai Deng, Dongxu Li, Yunshen Wei, Baohong Lv, Junjie Yan, Lingpeng Kong, Yiran Zhong. 17 Feb 2022.
Flowformer: Linearizing Transformers with Conservation Flows. Haixu Wu, Jialong Wu, Jiehui Xu, Jianmin Wang, Mingsheng Long. 13 Feb 2022.
Implicit Regularization in Hierarchical Tensor Factorization and Deep Convolutional Neural Networks. Noam Razin, Asaf Maman, Nadav Cohen. 27 Jan 2022.
Classification of Long Sequential Data using Circular Dilated Convolutional Neural Networks. Lei Cheng, Ruslan Khalitov, Tong Yu, Zhirong Yang. 06 Jan 2022.
TUNet: A Block-online Bandwidth Extension Model based on Transformers and Self-supervised Pretraining. Viet-Anh Nguyen, Anh H. T. Nguyen, Andy W. H. Khong. 26 Oct 2021.
The Challenge of Appearance-Free Object Tracking with Feedforward Neural Networks. Girik Malik, Drew Linsley, Thomas Serre, E. Mingolla. 30 Sep 2021.
Predictive coding feedback results in perceived illusory contours in a recurrent neural network. Zhaoyang Pang, Callum Biggs O'May, Bhavin Choksi, R. V. Rullen. 03 Feb 2021.
Recurrent neural circuits for contour detection. Drew Linsley, Junkyung Kim, A. Ashok, Thomas Serre. 29 Oct 2020.
Learning Part Boundaries from 3D Point Clouds. Marios Loizou, Melinos Averkiou, E. Kalogerakis. 15 Jul 2020.
Beyond accuracy: quantifying trial-by-trial behaviour of CNNs and humans by measuring error consistency. Robert Geirhos, Kristof Meding, Felix Wichmann. 30 Jun 2020.
Learning Physical Graph Representations from Visual Scenes. Daniel M. Bear, Chaofei Fan, Damian Mrowca, Yunzhu Li, S. Alter, ..., Jeremy Schwartz, Li Fei-Fei, Jiajun Wu, J. Tenenbaum, Daniel L. K. Yamins. 22 Jun 2020.
Stable and expressive recurrent vision models. Drew Linsley, A. Ashok, L. Govindarajan, Rex G Liu, Thomas Serre. 22 May 2020.
Going in circles is the way forward: the role of recurrence in visual inference. R. S. V. Bergen, N. Kriegeskorte. 26 Mar 2020.
Disentangling neural mechanisms for perceptual grouping. Junkyung Kim, Drew Linsley, Kalpit C. Thakkar, Thomas Serre. 04 Jun 2019.
Crowding in humans is unlike that in convolutional neural networks. Ben Lonnqvist, A. Clarke, R. Chakravarthi. 01 Mar 2019.
Robust neural circuit reconstruction from serial electron microscopy with convolutional recurrent networks. Drew Linsley, Junkyung Kim, D. Berson, Thomas Serre. 28 Nov 2018.
Task-Driven Convolutional Recurrent Models of the Visual System. Aran Nayebi, Daniel M. Bear, J. Kubilius, Kohitij Kar, Surya Ganguli, David Sussillo, J. DiCarlo, Daniel L. K. Yamins. 20 Jun 2018.
The Roles of Supervised Machine Learning in Systems Neuroscience. Joshua I. Glaser, Ari S. Benjamin, Roozbeh Farhoodi, Konrad Paul Kording. 21 May 2018.
Bridging the Gaps Between Residual Learning, Recurrent Neural Networks and Visual Cortex. Q. Liao, T. Poggio. 13 Apr 2016.
Pixel Recurrent Neural Networks. Aaron van den Oord, Nal Kalchbrenner, Koray Kavukcuoglu. 25 Jan 2016.