Instruct-ReID: A Multi-purpose Person Re-identification Task with Instructions

13 June 2023
Weizhen He
Yiheng Deng
Shixiang Tang
Qihao Chen
Qingsong Xie
Yizhou Wang
Lei Bai
Feng Zhu
Rui Zhao
Wanli Ouyang
Donglian Qi
Yunfeng Yan
Abstract

Human intelligence can retrieve any person according to both visual and language descriptions. However, the current computer vision community studies specific person re-identification (ReID) tasks for different scenarios separately, which limits real-world applications. This paper strives to resolve this problem by proposing a new instruct-ReID task that requires the model to retrieve images according to the given image or language instructions. Instruct-ReID is a more general ReID setting in which 6 existing ReID tasks can be viewed as special cases obtained by designing different instructions. We propose a large-scale OmniReID benchmark and an adaptive triplet loss as a baseline method to facilitate research in this new setting. Experimental results show that the proposed multi-purpose ReID model, trained on our OmniReID benchmark without fine-tuning, improves mAP by +0.5%, +0.6%, and +7.7% on Market1501, MSMT17, and CUHK03 for traditional ReID; by +6.4%, +7.1%, and +11.2% on PRCC, VC-Clothes, and LTCC for clothes-changing ReID; by +11.7% on COCAS+ real2 for clothes-template-based clothes-changing ReID when using only RGB images; by +24.9% on COCAS+ real2 for our newly defined language-instructed ReID; by +4.3% on LLCM for visible-infrared ReID; and by +2.6% on CUHK-PEDES for text-to-image ReID. The datasets, the model, and the code will be available at this https URL.
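The abstract names an adaptive triplet loss as the baseline training objective but does not specify its adaptive mechanism. Below is a minimal sketch of the standard margin-based triplet loss it presumably builds on, applied to instruction-conditioned embeddings; the function and tensor names are placeholders, not the paper's implementation.

```python
# Hypothetical sketch: a margin-based triplet loss over instruction-conditioned
# embeddings. The paper's "adaptive" variant is not detailed in the abstract,
# so this only illustrates the conventional starting point.
import torch
import torch.nn.functional as F


def triplet_loss(anchor, positive, negative, margin=0.3):
    """Margin-based triplet loss on L2-normalized embeddings."""
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negative = F.normalize(negative, dim=-1)
    d_ap = (anchor - positive).pow(2).sum(dim=-1)  # anchor-positive distance
    d_an = (anchor - negative).pow(2).sum(dim=-1)  # anchor-negative distance
    return F.relu(d_ap - d_an + margin).mean()


# Usage: embeddings would come from an image encoder fused with an image- or
# language-instruction encoder; random tensors stand in here.
emb_anchor = torch.randn(8, 256)
emb_pos = torch.randn(8, 256)
emb_neg = torch.randn(8, 256)
loss = triplet_loss(emb_anchor, emb_pos, emb_neg)
```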

@article{he2025_2306.07520,
  title={Instruct-ReID: A Multi-purpose Person Re-identification Task with Instructions},
  author={Weizhen He and Yiheng Deng and Shixiang Tang and Qihao Chen and Qingsong Xie and Yizhou Wang and Lei Bai and Feng Zhu and Rui Zhao and Wanli Ouyang and Donglian Qi and Yunfeng Yan},
  journal={arXiv preprint arXiv:2306.07520},
  year={2025}
}