ResearchTrend.AI
MSP: Multi-Stage Prompting for Making Pre-trained Language Models Better Translators
13 October 2021
Zhixing Tan, Xiangwen Zhang, Shuo Wang, Yang Liu

Papers citing "MSP: Multi-Stage Prompting for Making Pre-trained Language Models Better Translators"

14 papers shown:

  1. Implicit Discourse Relation Classification For Nigerian Pidgin (Muhammed Saeed, Peter Bourgonje, Vera Demberg; 26 Jun 2024)
  2. Gradable ChatGPT Translation Evaluation (Hui Jiao, Bei Peng, Lu Zong, Xiaojun Zhang, Xinwei Li; 18 Jan 2024)
  3. Domain Aligned Prefix Averaging for Domain Generalization in Abstractive Summarization (Pranav Ajit Nair, Sukomal Pal, Pradeepika Verma; 26 May 2023)
  4. Extrapolating Multilingual Understanding Models as Multilingual Generators (Bohong Wu, Fei Yuan, Hai Zhao, Lei Li, Jingjing Xu; 22 May 2023)
  5. LabelPrompt: Effective Prompt-based Learning for Relation Classification (W. Zhang, Xiaoning Song, Zhenhua Feng, Tianyang Xu, Xiaojun Wu; 16 Feb 2023)
  6. PromptDA: Label-guided Data Augmentation for Prompt-based Few-shot Learners (Canyu Chen, Kai Shu; 18 May 2022)
  7. GPT-NeoX-20B: An Open-Source Autoregressive Language Model (Sid Black, Stella Biderman, Eric Hallahan, Quentin G. Anthony, Leo Gao, ..., Shivanshu Purohit, Laria Reynolds, J. Tow, Benqi Wang, Samuel Weinbach; 14 Apr 2022)
  8. Delta Tuning: A Comprehensive Study of Parameter Efficient Methods for Pre-trained Language Models (Ning Ding, Yujia Qin, Guang Yang, Fu Wei, Zonghan Yang, ..., Jianfei Chen, Yang Liu, Jie Tang, Juan Li, Maosong Sun; 14 Mar 2022)
  9. P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks (Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Lam Tam, Zhengxiao Du, Zhilin Yang, Jie Tang; 14 Oct 2021)
  10. Multilingual Translation via Grafting Pre-trained Language Models (Zewei Sun, Mingxuan Wang, Lei Li; 11 Sep 2021)
  11. The Power of Scale for Parameter-Efficient Prompt Tuning (Brian Lester, Rami Al-Rfou, Noah Constant; 18 Apr 2021)
  12. Making Pre-trained Language Models Better Few-shot Learners (Tianyu Gao, Adam Fisch, Danqi Chen; 31 Dec 2020)
  13. Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism (M. Shoeybi, M. Patwary, Raul Puri, P. LeGresley, Jared Casper, Bryan Catanzaro; 17 Sep 2019)
  14. Six Challenges for Neural Machine Translation (Philipp Koehn, Rebecca Knowles; 12 Jun 2017)