
Attention Weights in Transformer NMT Fail Aligning Words Between Sequences but Largely Explain Model Predictions

13 September 2021
Javier Ferrando, Marta R. Costa-jussà
arXiv: 2109.05853

Papers citing "Attention Weights in Transformer NMT Fail Aligning Words Between Sequences but Largely Explain Model Predictions"

8 papers
Attention Mechanisms Don't Learn Additive Models: Rethinking Feature Importance for Transformers
Tobias Leemann, Alina Fastowski, Felix Pfeiffer, Gjergji Kasneci (10 Jan 2025)

Non-Fluent Synthetic Target-Language Data Improve Neural Machine Translation
Víctor M. Sánchez-Cartagena, Miquel Esplà-Gomis, J. A. Pérez-Ortiz, F. Sánchez-Martínez (29 Jan 2024)

Predicting Human Translation Difficulty with Neural Machine Translation
Zheng Wei Lim, Ekaterina Vylomova, Charles Kemp, Trevor Cohn (19 Dec 2023)

Optimal Transport for Unsupervised Hallucination Detection in Neural Machine Translation
Nuno M. Guerreiro, Pierre Colombo, Pablo Piantanida, André F.T. Martins (19 Dec 2022)

Word Alignment in the Era of Deep Learning: A Tutorial
Bryan Li (30 Nov 2022)

Towards Faithful Model Explanation in NLP: A Survey
Qing Lyu, Marianna Apidianaki, Chris Callison-Burch (22 Sep 2022)

SBERT studies Meaning Representations: Decomposing Sentence Embeddings into Explainable Semantic Features
Juri Opitz, Anette Frank (14 Jun 2022)

Towards Opening the Black Box of Neural Machine Translation: Source and Target Interpretations of the Transformer
Javier Ferrando, Gerard I. Gállego, Belen Alastruey, Carlos Escolano, Marta R. Costa-jussà (23 May 2022)