A Semantic-Enhanced Heterogeneous Graph Learning Method for Flexible Objects Recognition

28 March 2025
Kunshan Yang, Wenwei Luo, Yuguo Hu, Jiafu Yan, Mengmeng Jing, Lin Zuo
Abstract

Flexible objects recognition remains a significant challenge due to the objects' inherently diverse shapes and sizes, translucent attributes, and subtle inter-class differences. Graph-based models, such as graph convolution networks and graph vision models, are promising for flexible objects recognition because of their ability to capture variable relations within flexible objects. These methods, however, often focus on global visual relationships or fail to align semantic and visual information. To alleviate these limitations, we propose a semantic-enhanced heterogeneous graph learning method. First, an adaptive scanning module is employed to extract discriminative semantic context, facilitating the matching of flexible objects with varying shapes and sizes while aligning semantic and visual nodes to enhance cross-modal feature correlation. Second, a heterogeneous graph generation module aggregates global visual and local semantic node features, improving the recognition of flexible objects. Additionally, we introduce FSCW, a large-scale flexible-object dataset curated from existing sources. We validate our method through extensive experiments on flexible datasets (FDA and FSCW) and challenging benchmarks (CIFAR-100 and ImageNet-Hard), demonstrating competitive performance.
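
The two-stage design described above (cross-modal alignment of semantic and visual nodes, followed by heterogeneous aggregation of global visual and local semantic node features) can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation: the module name HeteroGraphAggregation, the dimensions, and the attention-based fusion are illustrative assumptions based only on the abstract.

# Minimal sketch (assumed, not the paper's code): visual patch nodes attend to
# semantic nodes for cross-modal alignment, then messages are propagated over a
# visual-node graph and fused back into the visual features.
import torch
import torch.nn as nn

class HeteroGraphAggregation(nn.Module):
    def __init__(self, dim: int = 256, num_heads: int = 4):
        super().__init__()
        # Cross-modal attention: visual nodes (queries) attend to semantic nodes.
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Linear mixing for messages propagated over the visual graph.
        self.visual_mix = nn.Linear(dim, dim)
        self.norm = nn.LayerNorm(dim)

    def forward(self, visual_nodes, semantic_nodes, adj):
        # visual_nodes: (B, Nv, D), semantic_nodes: (B, Ns, D), adj: (B, Nv, Nv)
        # 1) Align visual and semantic nodes via cross-modal attention.
        aligned, _ = self.cross_attn(visual_nodes, semantic_nodes, semantic_nodes)
        # 2) Propagate over the visual graph using a degree-normalized adjacency.
        deg = adj.sum(-1, keepdim=True).clamp(min=1.0)
        propagated = self.visual_mix((adj / deg) @ visual_nodes)
        # 3) Fuse aligned semantic context with propagated visual features.
        return self.norm(visual_nodes + aligned + propagated)

if __name__ == "__main__":
    B, Nv, Ns, D = 2, 49, 16, 256
    layer = HeteroGraphAggregation(dim=D)
    v = torch.randn(B, Nv, D)                      # e.g. ViT patch features
    s = torch.randn(B, Ns, D)                      # e.g. semantic/text embeddings
    a = (torch.rand(B, Nv, Nv) > 0.5).float()      # toy visual-node adjacency
    print(layer(v, s, a).shape)                    # torch.Size([2, 49, 256])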

View on arXiv
@article{yang2025_2503.22079,
  title={A Semantic-Enhanced Heterogeneous Graph Learning Method for Flexible Objects Recognition},
  author={Kunshan Yang and Wenwei Luo and Yuguo Hu and Jiafu Yan and Mengmeng Jing and Lin Zuo},
  journal={arXiv preprint arXiv:2503.22079},
  year={2025}
}