Attention Link: An Efficient Attention-Based Low Resource Machine Translation Architecture
Zeping Min
arXiv:2302.00340 · 1 February 2023
Papers citing "Attention Link: An Efficient Attention-Based Low Resource Machine Translation Architecture" (7 of 7 papers shown):
Title | Authors | Tags | Metrics | Date
Linformer: Self-Attention with Linear Complexity | Sinong Wang, Belinda Z. Li, Madian Khabsa, Han Fang, Hao Ma | - | 179 / 1,678 / 0 | 08 Jun 2020
Longformer: The Long-Document Transformer | Iz Beltagy, Matthew E. Peters, Arman Cohan | RALM, VLM | 106 / 3,996 / 0 | 10 Apr 2020
When and Why are Pre-trained Word Embeddings Useful for Neural Machine Translation? | Ye Qi, Devendra Singh Sachan, Matthieu Felix, Sarguna Padmanabhan, Graham Neubig | - | 90 / 343 / 0 | 17 Apr 2018
Improving Neural Machine Translation Models with Monolingual Data | Rico Sennrich, Barry Haddow, Alexandra Birch | - | 228 / 2,710 / 0 | 20 Nov 2015
Sequence to Sequence Learning with Neural Networks | Ilya Sutskever, Oriol Vinyals, Quoc V. Le | AIMat | 326 / 20,491 / 0 | 10 Sep 2014
Neural Machine Translation by Jointly Learning to Align and Translate | Dzmitry Bahdanau, Kyunghyun Cho, Yoshua Bengio | AIMat | 424 / 27,205 / 0 | 01 Sep 2014
Efficient Estimation of Word Representations in Vector Space | Tomas Mikolov, Kai Chen, G. Corrado, J. Dean | 3DV | 595 / 31,406 / 0 | 16 Jan 2013