BiggerGait: Unlocking Gait Recognition with Layer-wise Representations from Large Vision Models

23 May 2025
Dingqiang Ye
Chao Fan
Zhanbo Huang
Chengwen Luo
Jianqiang Li
Shiqi Yu
Xiaoming Liu
Communities: CV, BM, VLM
ArXiv (abs) · PDF · HTML
Main: 10 pages · 6 figures · 2 tables · Bibliography: 4 pages
Abstract

Gait recognition based on large vision models (LVMs) has achieved impressive performance. However, existing LVM-based approaches may overemphasize gait priors while neglecting the intrinsic value of the LVM itself, particularly the rich, distinct representations across its layers. To adequately unlock the LVM's potential, this work investigates the impact of layer-wise representations on downstream recognition tasks. Our analysis reveals that the LVM's intermediate layers offer complementary properties across tasks; integrating them yields an impressive improvement even without rich, well-designed gait priors. Building on this insight, we propose a simple and universal baseline for LVM-based gait recognition, termed BiggerGait. Comprehensive evaluations on CCPG, CASIA-B*, SUSTech1K, and CCGR_MINI validate the superiority of BiggerGait across both within- and cross-domain tasks, establishing it as a simple yet practical baseline for gait representation learning. All models and code will be publicly available.
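The core idea the abstract describes (pooling representations from several intermediate LVM layers and integrating them into a single embedding) can be sketched in a few lines. The following is a minimal illustration only, not the authors' BiggerGait implementation: the `LayerwiseGaitHead` name, the choice of layer indices, mean pooling over tokens, and concatenation-plus-linear fusion are all assumptions made for the sketch.

```python
# Minimal sketch of layer-wise feature integration for gait recognition.
# Assumptions (not from the paper): a ViT-style LVM exposing per-layer
# hidden states, mean pooling over tokens, and a linear fusion head.
import torch
import torch.nn as nn

class LayerwiseGaitHead(nn.Module):
    """Pools hidden states from selected intermediate LVM layers and fuses them."""
    def __init__(self, hidden_dim: int, layer_ids: list[int], embed_dim: int = 256):
        super().__init__()
        self.layer_ids = layer_ids
        # One projection per selected layer, then a shared fusion layer.
        self.proj = nn.ModuleList(nn.Linear(hidden_dim, embed_dim) for _ in layer_ids)
        self.fuse = nn.Linear(embed_dim * len(layer_ids), embed_dim)

    def forward(self, hidden_states: list[torch.Tensor]) -> torch.Tensor:
        # hidden_states[i]: (batch, tokens, hidden_dim) from LVM layer i.
        feats = []
        for proj, i in zip(self.proj, self.layer_ids):
            pooled = hidden_states[i].mean(dim=1)   # mean-pool over tokens
            feats.append(proj(pooled))
        # Concatenate per-layer features and fuse into one gait embedding.
        return self.fuse(torch.cat(feats, dim=-1))

# Usage with a hypothetical 24-layer backbone's hidden states:
head = LayerwiseGaitHead(hidden_dim=1024, layer_ids=[6, 12, 18, 23])
states = [torch.randn(2, 197, 1024) for _ in range(24)]
embedding = head(states)  # shape: (2, 256)
```

The point of the sketch is the abstract's claim that intermediate layers carry complementary information, so the head draws from several layers rather than only the final one; the actual fusion design in the paper may differ.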

View on arXiv
@article{ye2025_2505.18132,
  title={BiggerGait: Unlocking Gait Recognition with Layer-wise Representations from Large Vision Models},
  author={Dingqiang Ye and Chao Fan and Zhanbo Huang and Chengwen Luo and Jianqiang Li and Shiqi Yu and Xiaoming Liu},
  journal={arXiv preprint arXiv:2505.18132},
  year={2025}
}