The Unreasonable Effectiveness of Random Target Embeddings for Continuous-Output Neural Machine Translation

Evgeniia Tokarchuk, Vlad Niculae · 31 October 2023

Papers citing "The Unreasonable Effectiveness of Random Target Embeddings for Continuous-Output Neural Machine Translation"

22 papers shown

Diffusion-LM Improves Controllable Text Generation
Xiang Lisa Li, John Thickstun, Ishaan Gulrajani, Percy Liang, Tatsunori B. Hashimoto · 27 May 2022

Problems with Cosine as a Measure of Embedding Similarity for High Frequency Words
Kaitlyn Zhou, Kawin Ethayarajh, Dallas Card, Dan Jurafsky · 10 May 2022

Low-Rank Softmax Can Have Unargmaxable Classes in Theory but Rarely in Practice
Andreas Grivas, Nikolay Bogoychev, Adam Lopez · 12 Mar 2022

von Mises-Fisher Loss: An Exploration of Embedding Geometries for Supervised Learning
Tyler R. Scott, Andrew C. Gallagher, Michael C. Mozer · 29 Mar 2021

Adaptive Semiparametric Language Models
Dani Yogatama, Cyprien de Masson d'Autume, Lingpeng Kong · 04 Feb 2021

Nearest Neighbor Machine Translation
Urvashi Khandelwal, Angela Fan, Dan Jurafsky, Luke Zettlemoyer, M. Lewis · 01 Oct 2020

Language-agnostic BERT Sentence Embedding
Fangxiaoyu Feng, Yinfei Yang, Daniel Cer, N. Arivazhagan, Wei Wang · 03 Jul 2020

Contextual Embeddings: When Are They Worth It?
Simran Arora, Avner May, Jian Zhang, Christopher Ré · 18 May 2020

Multilingual Denoising Pre-training for Neural Machine Translation
Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, M. Lewis, Luke Zettlemoyer · 22 Jan 2020

Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks
Nils Reimers, Iryna Gurevych · 27 Aug 2019

BERTScore: Evaluating Text Generation with BERT
Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, Yoav Artzi · 21 Apr 2019

fairseq: A Fast, Extensible Toolkit for Sequence Modeling
Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, Michael Auli · 01 Apr 2019

compare-mt: A Tool for Holistic Comparison of Language Generation Systems
Graham Neubig, Zi-Yi Dou, Junjie Hu, Paul Michel, Danish Pruthi, Xinyi Wang, John Wieting · 19 Mar 2019

Von Mises-Fisher Loss for Training Sequence to Sequence Models with Continuous Outputs
Sachin Kumar, Yulia Tsvetkov · 10 Dec 2018

SentencePiece: A simple and language independent subword tokenizer and detokenizer for Neural Text Processing
Taku Kudo, John Richardson · 19 Aug 2018

Subword Regularization: Improving Neural Network Translation Models with Multiple Subword Candidates
Taku Kudo · 29 Apr 2018

A Call for Clarity in Reporting BLEU Scores
Matt Post · 23 Apr 2018

Advances in Pre-Training Distributed Word Representations
Tomas Mikolov, Edouard Grave, Piotr Bojanowski, Christian Puhrsch, Armand Joulin · 26 Dec 2017

Attention Is All You Need
Ashish Vaswani, Noam M. Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan Gomez, Lukasz Kaiser, Illia Polosukhin · 12 Jun 2017

Enriching Word Vectors with Subword Information
Piotr Bojanowski, Edouard Grave, Armand Joulin, Tomas Mikolov · 15 Jul 2016

Improving zero-shot learning by mitigating the hubness problem
Georgiana Dinu, Angeliki Lazaridou, Marco Baroni · 20 Dec 2014

Efficient Estimation of Word Representations in Vector Space
Tomas Mikolov, Kai Chen, G. Corrado, J. Dean · 16 Jan 2013