A Representation Level Analysis of NMT Model Robustness to Grammatical Errors

27 May 2025
Abderrahmane Issam
Yusuf Can Semerci
Jan Scholtes
Gerasimos Spanakis
Main: 9 pages · Bibliography: 4 pages · Appendix: 10 pages · 16 figures · 5 tables
Abstract

Understanding robustness is essential for building reliable NLP systems. Unfortunately, in the context of machine translation, previous work has mainly focused on documenting robustness failures or improving robustness. In contrast, we study robustness from a representation perspective, examining the internal model representations of ungrammatical inputs and how they evolve through the model's layers. For this purpose, we perform Grammatical Error Detection (GED) probing and representational similarity analysis. Our findings indicate that the encoder first detects the grammatical error, then corrects it by moving its representation toward the correct form. To understand what contributes to this process, we turn to the attention mechanism, where we identify what we term Robustness Heads. We find that Robustness Heads attend to interpretable linguistic units when responding to grammatical errors, and that when we fine-tune models for robustness, they tend to rely more on Robustness Heads for updating the ungrammatical word representation.
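The GED probing described above trains a lightweight classifier on per-token encoder hidden states to test whether a layer's representations encode the presence of a grammatical error. The following is a minimal, self-contained sketch of that idea using synthetic data in place of real NMT encoder states; the dimensionality, the synthetic shift applied to "erroneous" tokens, and the probe itself are illustrative assumptions, not the authors' actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 32   # hypothetical hidden-state dimensionality of an NMT encoder layer
n = 400  # number of token representations

# Synthetic "hidden states": tokens carrying a grammatical error are shifted
# along one direction, mimicking a layer where the error is linearly detectable.
labels = rng.integers(0, 2, size=n)          # 0 = grammatical, 1 = error
states = rng.normal(size=(n, d))
states[labels == 1] += 2.0 * np.ones(d) / np.sqrt(d)

def train_probe(X, y, lr=0.5, steps=500):
    """Train a logistic-regression GED probe with plain gradient descent."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid probabilities
        w -= lr * (X.T @ (p - y)) / len(y)       # gradient of log-loss w.r.t. w
        b -= lr * np.mean(p - y)                 # gradient w.r.t. bias
    return w, b

w, b = train_probe(states, labels)
preds = (states @ w + b > 0).astype(int)
accuracy = float(np.mean(preds == labels))
print(f"probe accuracy: {accuracy:.2f}")
```

Running such a probe independently on each encoder layer's states, and comparing accuracies across layers, is what lets one localize where error information first appears in the network.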

@article{issam2025_2505.21224,
  title={A Representation Level Analysis of NMT Model Robustness to Grammatical Errors},
  author={Abderrahmane Issam and Yusuf Can Semerci and Jan Scholtes and Gerasimos Spanakis},
  journal={arXiv preprint arXiv:2505.21224},
  year={2025}
}