
Multimodal Machine Translation with Visual Scene Graph Pruning

26 May 2025
Chenyu Lu
Shiliang Sun
Jing Zhao
Nan Zhang
Tengfei Song
Hao Yang
Abstract

Multimodal machine translation (MMT) seeks to address the challenges posed by linguistic polysemy and ambiguity in translation tasks by incorporating visual information. A key bottleneck in current MMT research is the effective utilization of visual data. Previous approaches have focused on extracting global or region-level image features and using attention or gating mechanisms for multimodal information fusion. However, these methods have neither adequately tackled the issue of visual information redundancy in MMT nor proposed effective solutions to it. In this paper, we introduce a novel approach, multimodal machine translation with visual Scene Graph Pruning (PSG), which leverages language scene graph information to guide the pruning of redundant nodes in visual scene graphs, thereby reducing noise in downstream translation tasks. Through extensive comparative experiments with state-of-the-art methods and ablation studies, we demonstrate the effectiveness of the PSG model. Our results also highlight the promising potential of visual information pruning in advancing the field of MMT.
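The core idea of language-guided pruning can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes visual and language scene-graph nodes are already encoded as embedding vectors, and it simply keeps the visual nodes whose best cosine similarity to any language node is highest, discarding the rest as redundant.

```python
import numpy as np

def prune_visual_nodes(visual_feats, language_feats, keep_ratio=0.5):
    """Illustrative language-guided pruning of visual scene-graph nodes.

    visual_feats:   (V, d) array of visual node embeddings
    language_feats: (L, d) array of language node embeddings
    Returns sorted indices of the visual nodes to keep.
    """
    # Normalize rows so that dot products are cosine similarities.
    v = visual_feats / np.linalg.norm(visual_feats, axis=1, keepdims=True)
    l = language_feats / np.linalg.norm(language_feats, axis=1, keepdims=True)
    sim = v @ l.T                 # (V, L) cosine similarity matrix
    scores = sim.max(axis=1)      # best language match per visual node
    k = max(1, int(keep_ratio * len(scores)))
    keep = np.argsort(-scores)[:k]  # top-k most language-relevant nodes
    return np.sort(keep)
```

In practice, the pruned visual graph (rather than the full one) would then be fused with the text representation for translation; the `keep_ratio` here is a stand-in for whatever selection criterion the paper actually uses.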

@article{lu2025_2505.19507,
  title={Multimodal Machine Translation with Visual Scene Graph Pruning},
  author={Chenyu Lu and Shiliang Sun and Jing Zhao and Nan Zhang and Tengfei Song and Hao Yang},
  journal={arXiv preprint arXiv:2505.19507},
  year={2025}
}