BioCLIP 2: Emergent Properties from Scaling Hierarchical Contrastive Learning

29 May 2025
Jianyang Gu, Samuel Stevens, Elizabeth G. Campolongo, Matthew J. Thompson, Net Zhang, Jiaman Wu, Andrei Kopanev, Zheda Mai, Alexander E. White, James P. Balhoff, Wasila Dahdul, Daniel Rubenstein, Hilmar Lapp, Tanya Berger-Wolf, Wei-Lun Chao, Yu Su
Main: 9 pages · Appendix: 14 pages · Bibliography: 6 pages · 16 figures · 8 tables
Abstract

Foundation models trained at scale exhibit remarkable emergent behaviors, learning new capabilities beyond their initial training objectives. We find such emergent behaviors in biological vision models via large-scale contrastive vision-language training. To achieve this, we first curate TreeOfLife-200M, comprising 214 million images of living organisms, the largest and most diverse biological organism image dataset to date. We then train BioCLIP 2 on TreeOfLife-200M to distinguish different species. Despite the narrow training objective, BioCLIP 2 yields extraordinary accuracy when applied to various biological visual tasks such as habitat classification and trait prediction. We identify emergent properties in the learned embedding space of BioCLIP 2. At the inter-species level, the embedding distribution of different species aligns closely with functional and ecological meanings (e.g., beak sizes and habitats). At the intra-species level, instead of being diminished, the intra-species variations (e.g., life stages and sexes) are preserved and better separated in subspaces orthogonal to inter-species distinctions. We provide formal proof and analyses to explain why hierarchical supervision and contrastive objectives encourage these emergent properties. Crucially, our results reveal that these properties become increasingly significant with larger-scale training data, leading to a biologically meaningful embedding space.
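The abstract describes CLIP-style contrastive training in which each image is paired with text derived from the organism's taxonomic hierarchy. The sketch below illustrates that setup using the open_clip library. It is a minimal illustration, not the authors' released training code: the checkpoint id shown is the published BioCLIP v1 weights (a BioCLIP 2 id would be analogous), and the `taxonomic_caption` helper and the toy batch are hypothetical.

```python
# Minimal sketch of hierarchical contrastive training as described in the
# abstract: images are matched against captions built from the taxonomic
# hierarchy, trained with the standard CLIP symmetric InfoNCE loss.
import torch
import torch.nn.functional as F
import open_clip

# Assumption: BioCLIP v1 checkpoint id; a BioCLIP 2 release would be loaded
# the same way with its own hub id.
model, _, preprocess = open_clip.create_model_and_transforms("hf-hub:imageomics/bioclip")
tokenizer = open_clip.get_tokenizer("hf-hub:imageomics/bioclip")

def taxonomic_caption(ranks: list[str]) -> str:
    """Hypothetical helper: the caption is the kingdom-to-species path,
    so species sharing higher ranks share most of their caption text."""
    return "a photo of " + " ".join(ranks)

def symmetric_infonce(img_feats, txt_feats, logit_scale):
    """CLIP objective: matched image/caption pairs are positives,
    every other pair in the batch is a negative."""
    img_feats = F.normalize(img_feats, dim=-1)
    txt_feats = F.normalize(txt_feats, dim=-1)
    logits = logit_scale * img_feats @ txt_feats.t()
    labels = torch.arange(logits.shape[0], device=logits.device)
    return (F.cross_entropy(logits, labels)
            + F.cross_entropy(logits.t(), labels)) / 2

# Toy batch of two species; the random tensors stand in for preprocessed photos.
captions = [
    taxonomic_caption(["Animalia", "Chordata", "Aves", "Passeriformes",
                       "Corvidae", "Corvus", "Corvus corax"]),
    taxonomic_caption(["Animalia", "Chordata", "Aves", "Strigiformes",
                       "Strigidae", "Bubo", "Bubo bubo"]),
]
text = tokenizer(captions)
images = torch.randn(2, 3, 224, 224)
with torch.no_grad():
    loss = symmetric_infonce(model.encode_image(images),
                             model.encode_text(text),
                             model.logit_scale.exp())
```

Because captions for related species overlap at the higher taxonomic ranks, the contrastive objective pulls their image embeddings toward partially shared text targets; this overlap is plausibly the mechanism behind the hierarchy-aware embedding structure the abstract attributes to "hierarchical supervision and contrastive objectives."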

@article{gu2025_2505.23883,
  title={BioCLIP 2: Emergent Properties from Scaling Hierarchical Contrastive Learning},
  author={Jianyang Gu and Samuel Stevens and Elizabeth G. Campolongo and Matthew J. Thompson and Net Zhang and Jiaman Wu and Andrei Kopanev and Zheda Mai and Alexander E. White and James P. Balhoff and Wasila Dahdul and Daniel Rubenstein and Hilmar Lapp and Tanya Berger-Wolf and Wei-Lun Chao and Yu Su},
  journal={arXiv preprint arXiv:2505.23883},
  year={2025}
}