A general multi-modal data learning method for Person Re-identification

21 January 2021
Yunpeng Gong
arXiv:2101.08533
Abstract

This paper proposes a general multi-modal data learning method, which includes a Global Homogeneous Transformation, a Local Homogeneous Transformation, and their combination. During ReID model training, on the one hand, the method randomly selects a rectangular area in the RGB image and replaces its color with that of the same rectangular area in the corresponding homogeneous image, thereby generating a training image with different homogeneous areas; on the other hand, it converts an entire image into a homogeneous image. These two operations help the model directly learn the relationship between different modalities in the special ReID task. In single-modal ReID tasks, the method can be used as an effective data augmentation. Experimental results show that our method achieves a performance improvement of up to 3.3% in the single-modal ReID task, and an improvement of more than 8% in Sketch Re-identification. In addition, our experiments show that the method is also very useful in adversarial training for adversarial defense: it helps the model learn faster and better from adversarial examples.
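The two transformations can be pictured as a patch-level and an image-level modality swap. The sketch below is a minimal illustration, assuming grayscale images stand in for the "homogeneous" modality; the function names and the sampling parameters (probability, area and aspect-ratio ranges) are illustrative assumptions in the style of Random-Erasing-type augmentations, not the paper's exact settings.

```python
import random
import numpy as np

def to_homogeneous(rgb: np.ndarray) -> np.ndarray:
    """Global Homogeneous Transformation (assumed grayscale variant):
    map an RGB image (H, W, 3) to a 3-channel homogeneous image."""
    gray = rgb.mean(axis=2, keepdims=True)          # simple luminance proxy
    return np.repeat(gray, 3, axis=2).astype(rgb.dtype)

def local_homogeneous(rgb: np.ndarray, p: float = 0.5,
                      area=(0.02, 0.4), aspect=(0.3, 3.3)) -> np.ndarray:
    """Local Homogeneous Transformation: paste a random rectangle of the
    homogeneous image back onto the RGB image, producing a training
    sample that mixes both modalities."""
    if random.random() > p:                          # apply with probability p
        return rgb
    h, w = rgb.shape[:2]
    homo = to_homogeneous(rgb)
    out = rgb.copy()
    for _ in range(100):                             # retry until the box fits
        target = random.uniform(*area) * h * w       # target rectangle area
        ratio = random.uniform(*aspect)              # height / width ratio
        rh = int(round(np.sqrt(target * ratio)))
        rw = int(round(np.sqrt(target / ratio)))
        if 0 < rh < h and 0 < rw < w:
            top = random.randint(0, h - rh)
            left = random.randint(0, w - rw)
            out[top:top + rh, left:left + rw] = homo[top:top + rh, left:left + rw]
            return out
    return rgb                                       # fall back to the original
```

In a training pipeline such transforms would typically be applied per sample before normalization, so that in each batch the model sees a mix of RGB, fully homogeneous, and locally mixed images.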
