Cross-Attention is All You Need: Adapting Pretrained Transformers for Machine Translation

18 April 2021
Mozhdeh Gheini, Xiang Ren, Jonathan May

Papers citing "Cross-Attention is All You Need: Adapting Pretrained Transformers for Machine Translation"

3 of 53 citing papers shown

  • The Power of Scale for Parameter-Efficient Prompt Tuning
    Brian Lester, Rami Al-Rfou, Noah Constant
    18 Apr 2021
  • WARP: Word-level Adversarial ReProgramming
    Karen Hambardzumyan, Hrant Khachatrian, Jonathan May
    01 Jan 2021
  • Word Translation Without Parallel Data
    Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, Hervé Jégou
    11 Oct 2017