Training on Synthetic Noise Improves Robustness to Natural Noise in Machine Translation

5 February 2019
Vladimir Karpukhin, Omer Levy, Jacob Eisenstein, Marjan Ghazvininejad

Papers citing "Training on Synthetic Noise Improves Robustness to Natural Noise in Machine Translation"

13 / 13 papers shown
Specification Overfitting in Artificial Intelligence
Benjamin Roth, Pedro Henrique Luz de Araujo, Yuxi Xia, Saskia Kaltenbrunner, Christoph Korab
176 · 1 · 0 · 13 Mar 2024

Mining Naturally-occurring Corrections and Paraphrases from Wikipedia's Revision History
Aurélien Max, Guillaume Wisniewski
KELM · 41 · 78 · 0 · 25 Feb 2022

Improving Robustness of Machine Translation with Synthetic Noise
Vaibhav, Sumeet Singh, Craig Alan Stewart, Graham Neubig
49 · 83 · 0 · 25 Feb 2019

MTNT: A Testbed for Machine Translation of Noisy Text
Paul Michel, Graham Neubig
55 · 146 · 0 · 02 Sep 2018

Towards Robust Neural Machine Translation
Yong Cheng, Zhaopeng Tu, Fandong Meng, Junjie Zhai, Yang Liu
AAML · 41 · 161 · 0 · 16 May 2018

Synthetic and Natural Noise Both Break Neural Machine Translation
Yonatan Belinkov, Yonatan Bisk
111 · 740 · 0 · 06 Nov 2017

Attention Is All You Need
Ashish Vaswani, Noam M. Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan Gomez, Lukasz Kaiser, Illia Polosukhin
3DV · 687 · 131,526 · 0 · 12 Jun 2017

How Robust Are Character-Based Word Embeddings in Tagging and MT Against Wrod Scramlbing or Randdm Nouse?
G. Heigold, G. Neumann, Josef van Genabith
58 · 62 · 0 · 14 Apr 2017

Robsut Wrod Reocginiton via semi-Character Recurrent Neural Network
Keisuke Sakaguchi, Kevin Duh, Matt Post, Benjamin Van Durme
49 · 89 · 0 · 07 Aug 2016

Neural Machine Translation of Rare Words with Subword Units
Rico Sennrich, Barry Haddow, Alexandra Birch
215 · 7,735 · 0 · 31 Aug 2015

Character-Aware Neural Language Models
Yoon Kim, Yacine Jernite, David Sontag, Alexander M. Rush
95 · 1,669 · 0 · 26 Aug 2015

Explaining and Harnessing Adversarial Examples
Ian Goodfellow, Jonathon Shlens, Christian Szegedy
AAML · GAN · 274 · 19,049 · 0 · 20 Dec 2014

Dropout Training as Adaptive Regularization
Stefan Wager, Sida I. Wang, Percy Liang
129 · 599 · 0 · 04 Jul 2013