ResearchTrend.AI


Towards Efficient Deep Hashing Retrieval: Condensing Your Data via Feature-Embedding Matching

29 May 2023
Tao Feng
Jie Zhang
Peizheng Wang
Zhijie Wang
Abstract

The expense of training state-of-the-art deep hashing retrieval models has increased due to the adoption of more sophisticated models and large-scale datasets. Dataset Distillation (DD), also known as Dataset Condensation (DC), aims to generate a smaller synthetic dataset that retains the information of the original. Nevertheless, existing DD methods struggle to balance accuracy and efficiency, and state-of-the-art dataset distillation methods cannot be extended to all deep hashing retrieval methods. In this paper, we propose an efficient condensation framework that addresses these limitations by matching the feature embeddings of the synthetic set and the real set. Furthermore, we enhance the diversity of features by incorporating early-stage augmented models and a multi-formation strategy. Extensive experiments provide compelling evidence of the superiority of our approach, in both performance and efficiency, over state-of-the-art baseline methods.
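The paper's implementation is not reproduced here, but the core idea of feature-embedding matching — optimizing a small synthetic set so that its feature embeddings agree with those of the real set — can be sketched as follows. The linear feature extractor, dimensions, learning rate, and iteration count are illustrative assumptions, not the authors' setup; a real deep hashing backbone would replace the random linear map.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy setup: a fixed random linear map W stands in for a
# (partially trained) deep hashing feature extractor.
d_in, d_feat = 32, 8
W = rng.standard_normal((d_feat, d_in)) / np.sqrt(d_in)

def embed_mean(X):
    """Mean feature embedding of a set of samples (rows of X)."""
    return (X @ W.T).mean(axis=0)

# A large real set and a much smaller synthetic set to be learned.
X_real = rng.standard_normal((500, d_in))
X_syn = rng.standard_normal((5, d_in))

target = embed_mean(X_real)
lr = 1.0
for _ in range(500):
    diff = embed_mean(X_syn) - target           # embedding mismatch
    # Analytic gradient of ||diff||^2 w.r.t. each synthetic sample.
    # Every row gets the same gradient because only the set mean
    # enters this simplified matching loss.
    grad = (2.0 / len(X_syn)) * (diff @ W)
    X_syn -= lr * grad                          # update synthetic data

final_loss = float(np.sum((embed_mean(X_syn) - target) ** 2))
```

After optimization, the synthetic set's mean embedding closely matches the real set's, at a fraction of the real set's size. The paper's framework additionally draws feature statistics from early-stage augmented models and a multi-formation scheme to diversify what the synthetic set must match.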

@article{feng2025_2305.18076,
  title={Towards Efficient Deep Hashing Retrieval: Condensing Your Data via Feature-Embedding Matching},
  author={Tao Feng and Jie Zhang and Huashan Liu and Zhijie Wang and Shengyuan Pang},
  journal={arXiv preprint arXiv:2305.18076},
  year={2025}
}