Aligned Contrastive Loss for Long-Tailed Recognition

In this paper, we propose an Aligned Contrastive Learning (ACL) algorithm to address the long-tailed recognition problem. Our findings indicate that while multi-view training improves performance, contrastive learning does not consistently enhance model generalization as the number of views increases. Through a theoretical gradient analysis of supervised contrastive learning (SCL), we identify gradient conflicts and imbalanced attraction and repulsion gradients between positive and negative pairs as the underlying issues. Our ACL algorithm is designed to resolve these issues and demonstrates strong performance across multiple benchmarks. We validate the effectiveness of ACL through experiments on long-tailed CIFAR, ImageNet, Places, and iNaturalist datasets. Results show that ACL achieves new state-of-the-art performance.
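For context, the SCL objective whose gradients the analysis studies is the supervised contrastive loss of Khosla et al. (2020). A minimal NumPy sketch is given below; the function name `supcon_loss` and its arguments are illustrative, and this is not the paper's implementation of ACL itself.

```python
import numpy as np

def supcon_loss(features, labels, tau=0.1):
    """Sketch of the supervised contrastive (SCL) loss.

    features: (N, D) L2-normalized embeddings (N = batch size x views).
    labels:   (N,) integer class labels.
    tau:      temperature.
    """
    N = features.shape[0]
    sim = features @ features.T / tau                  # pairwise similarities
    logits_mask = ~np.eye(N, dtype=bool)               # exclude self-pairs
    sim_max = sim.max(axis=1, keepdims=True)           # for numerical stability
    exp_sim = np.exp(sim - sim_max) * logits_mask
    # log-probability of each pair under a softmax over all other samples
    log_prob = (sim - sim_max) - np.log(exp_sim.sum(axis=1, keepdims=True))
    # positives: same label as the anchor, excluding the anchor itself
    pos_mask = (labels[:, None] == labels[None, :]) & logits_mask
    pos_counts = pos_mask.sum(axis=1)
    valid = pos_counts > 0                             # anchors with >= 1 positive
    mean_log_prob_pos = (log_prob * pos_mask).sum(axis=1)[valid] / pos_counts[valid]
    return -mean_log_prob_pos.mean()
```

In this formulation every positive pair contributes an attraction gradient and every negative pair a repulsion gradient to the anchor; the paper's claim is that as the number of views grows, these per-pair gradients can conflict and become imbalanced.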
@article{ma2025_2506.01071,
  title   = {Aligned Contrastive Loss for Long-Tailed Recognition},
  author  = {Jiali Ma and Jiequan Cui and Maeno Kazuki and Lakshmi Subramanian and Karlekar Jayashree and Sugiri Pranata and Hanwang Zhang},
  journal = {arXiv preprint arXiv:2506.01071},
  year    = {2025}
}