
Extrapolating Multilingual Understanding Models as Multilingual Generators
arXiv:2305.13140, 22 May 2023
Bohong Wu, Fei Yuan, Hai Zhao, Lei Li, Jingjing Xu

Papers citing "Extrapolating Multilingual Understanding Models as Multilingual Generators" (18 papers shown)

BLOOM: A 176B-Parameter Open-Access Multilingual Language Model
BigScience Workshop: Teven Le Scao, Angela Fan, Christopher Akiki, ..., Zhongli Xie, Zifan Ye, M. Bras, Younes Belkada, Thomas Wolf
09 Nov 2022

DiffuSeq: Sequence to Sequence Text Generation with Diffusion Models
Shansan Gong, Mukai Li, Jiangtao Feng, Zhiyong Wu, Lingpeng Kong
17 Oct 2022

Gradient-Based Constrained Sampling from Language Models
Sachin Kumar, Biswajit Paria, Yulia Tsvetkov
25 May 2022

COLD Decoding: Energy-based Constrained Text Generation with Langevin Dynamics
Lianhui Qin, Sean Welleck, Daniel Khashabi, Yejin Choi
23 Feb 2022

Step-unrolled Denoising Autoencoders for Text Generation
Nikolay Savinov, Junyoung Chung, Mikolaj Binkowski, Erich Elsen, Aaron van den Oord
13 Dec 2021

MSP: Multi-Stage Prompting for Making Pre-trained Language Models Better Translators
Zhixing Tan, Xiangwen Zhang, Shuo Wang, Yang Liu
13 Oct 2021

Order-Agnostic Cross Entropy for Non-Autoregressive Machine Translation
Cunxiao Du, Zhaopeng Tu, Jing Jiang
09 Jun 2021

The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant
18 Apr 2021

Prefix-Tuning: Optimizing Continuous Prompts for Generation
Xiang Lisa Li, Percy Liang
01 Jan 2021

Glancing Transformer for Non-Autoregressive Neural Machine Translation
Lihua Qian, Hao Zhou, Yu Bao, Mingxuan Wang, Lin Qiu, Weinan Zhang, Yong Yu, Lei Li
18 Aug 2020

Non-Autoregressive Machine Translation with Latent Alignments
Chitwan Saharia, William Chan, Saurabh Saxena, Mohammad Norouzi
16 Apr 2020

Aligned Cross Entropy for Non-Autoregressive Machine Translation
Marjan Ghazvininejad, Vladimir Karpukhin, Luke Zettlemoyer, Omer Levy
03 Apr 2020

XTREME: A Massively Multilingual Multi-task Benchmark for Evaluating Cross-lingual Generalization
Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, Melvin Johnson
24 Mar 2020

Unsupervised Cross-lingual Representation Learning at Scale
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, Veselin Stoyanov
05 Nov 2019

Simple, Scalable Adaptation for Neural Machine Translation
Ankur Bapna, N. Arivazhagan, Orhan Firat
18 Sep 2019

RoBERTa: A Robustly Optimized BERT Pretraining Approach
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, M. Lewis, Luke Zettlemoyer, Veselin Stoyanov
26 Jul 2019

A Call for Clarity in Reporting BLEU Scores
Matt Post
23 Apr 2018

When and Why are Pre-trained Word Embeddings Useful for Neural Machine Translation?
Ye Qi, Devendra Singh Sachan, Matthieu Felix, Sarguna Padmanabhan, Graham Neubig
17 Apr 2018