ResearchTrend.AI
Project-Probe-Aggregate: Efficient Fine-Tuning for Group Robustness

12 March 2025
Beier Zhu
Jiequan Cui
Hanwang Zhang
Chi Zhang
Abstract

While image-text foundation models have succeeded across diverse downstream tasks, they still face challenges in the presence of spurious correlations between the input and label. To address this issue, we propose a simple three-step approach, Project-Probe-Aggregate (PPA), that enables parameter-efficient fine-tuning of foundation models without relying on group annotations. Building upon the failure-based debiasing scheme, PPA improves its two key components: minority sample identification and the robust training algorithm. Specifically, we first train biased classifiers by projecting image features onto the nullspace of class proxies from text encoders. Next, we infer group labels using the biased classifier and probe group targets with prior correction. Finally, we aggregate the group weights of each class to produce the debiased classifier. Our theoretical analysis shows that PPA enhances minority-group identification and is Bayes optimal for minimizing the balanced group error, mitigating spurious correlations. Extensive experimental results confirm the effectiveness of PPA: it improves average worst-group accuracy over the state of the art while tuning fewer than 0.01% of the parameters and requiring no training group labels.
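The three steps described in the abstract can be sketched on toy data. This is a minimal illustrative reading of the pipeline, not the authors' implementation: all variable names, the least-squares probes, and the equal-weight aggregation are assumptions made for the sketch (the paper's probing uses prior correction, which is omitted here).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: n samples of d-dim image features, C classes.
n, d, C = 200, 16, 3
features = rng.normal(size=(n, d))
labels = rng.integers(0, C, size=n)
text_proxies = rng.normal(size=(C, d))  # class embeddings from a text encoder

# Step 1 (Project): remove the class-proxy subspace, so a probe trained on
# the projected features can only exploit non-class (spurious) directions.
U, _, _ = np.linalg.svd(text_proxies.T, full_matrices=False)  # d x C basis
P_null = np.eye(d) - U @ U.T              # projector onto the nullspace
biased_feats = features @ P_null

# Step 2 (Probe): fit a simple biased classifier on the projected features;
# samples it misclassifies are treated as minority-group candidates.
onehot = np.eye(C)[labels]
W_bias = np.linalg.lstsq(biased_feats, onehot, rcond=None)[0]
biased_pred = (biased_feats @ W_bias).argmax(axis=1)
group = (biased_pred != labels).astype(int)  # 0 = majority, 1 = minority

# Step 3 (Aggregate): fit one classifier per inferred group on the original
# features, then average the per-group weights so each class head treats
# both groups equally (a balanced-group objective).
heads = []
for g in (0, 1):
    mask = group == g
    Wg = (np.linalg.lstsq(features[mask], onehot[mask], rcond=None)[0]
          if mask.any() else np.zeros((d, C)))
    heads.append(Wg)
W_debiased = sum(heads) / len(heads)
debiased_pred = (features @ W_debiased).argmax(axis=1)
```

With this construction, `biased_feats` is exactly orthogonal to the span of the text proxies, which is what forces the probe in step 2 to rely on residual, class-irrelevant structure.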

@article{zhu2025_2503.09487,
  title={Project-Probe-Aggregate: Efficient Fine-Tuning for Group Robustness},
  author={Beier Zhu and Jiequan Cui and Hanwang Zhang and Chi Zhang},
  journal={arXiv preprint arXiv:2503.09487},
  year={2025}
}