ResearchTrend.AI

Attention Link: An Efficient Attention-Based Low Resource Machine Translation Architecture
Zeping Min
arXiv:2302.00340 · 1 February 2023

Papers citing "Attention Link: An Efficient Attention-Based Low Resource Machine Translation Architecture"

Showing 12 of 12 citing papers.
  1. Optimizing Transformer for Low-Resource Neural Machine Translation
     Ali Araabi, Christof Monz · 04 Nov 2020 · 78 citations
  2. A Simple but Tough-to-Beat Data Augmentation Approach for Natural Language Understanding and Generation
     Dinghan Shen, Ming Zheng, Yelong Shen, Yanru Qu, Weizhu Chen · 29 Sep 2020 · 130 citations
  3. Linformer: Self-Attention with Linear Complexity
     Sinong Wang, Belinda Z. Li, Madian Khabsa, Han Fang, Hao Ma · 08 Jun 2020 · 1,694 citations
  4. Longformer: The Long-Document Transformer
     Iz Beltagy, Matthew E. Peters, Arman Cohan · 10 Apr 2020 · 4,048 citations
  5. Fixed Encoder Self-Attention Patterns in Transformer-Based Machine Translation
     Alessandro Raganato, Yves Scherrer, Jörg Tiedemann · 24 Feb 2020 · 92 citations
  6. Learning Deep Transformer Models for Machine Translation
     Qiang Wang, Bei Li, Tong Xiao, Jingbo Zhu, Changliang Li, Derek F. Wong, Lidia S. Chao · 05 Jun 2019 · 670 citations
  7. When and Why are Pre-trained Word Embeddings Useful for Neural Machine Translation?
     Ye Qi, Devendra Singh Sachan, Matthieu Felix, Sarguna Padmanabhan, Graham Neubig · 17 Apr 2018 · 343 citations
  8. Improving Neural Machine Translation Models with Monolingual Data
     Rico Sennrich, Barry Haddow, Alexandra Birch · 20 Nov 2015 · 2,716 citations
  9. Sequence to Sequence Learning with Neural Networks
     Ilya Sutskever, Oriol Vinyals, Quoc V. Le · 10 Sep 2014 · 20,518 citations
  10. Neural Machine Translation by Jointly Learning to Align and Translate
      Dzmitry Bahdanau, Kyunghyun Cho, Yoshua Bengio · 01 Sep 2014 · 27,263 citations
  11. Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation
      Kyunghyun Cho, B. V. Merrienboer, Çağlar Gülçehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, Yoshua Bengio · 03 Jun 2014 · 23,310 citations
  12. Efficient Estimation of Word Representations in Vector Space
      Tomas Mikolov, Kai Chen, G. Corrado, J. Dean · 16 Jan 2013 · 31,469 citations