
arXiv:2201.10972 (v2, latest)

How Robust are Discriminatively Trained Zero-Shot Learning Models?

26 January 2022
M. K. Yucel
R. G. Cinbis
Pinar Duygulu
Abstract

Data shift robustness is an active research topic; however, it has been investigated primarily from a fully supervised perspective, and the robustness of zero-shot learning (ZSL) models has been largely neglected. In this paper, we present a novel analysis of the robustness of discriminative ZSL models to image corruptions. We leverage the well-known label embedding model and subject it to a large set of common corruptions and defenses. To enable this corruption analysis, we curate and release the first ZSL corruption-robustness datasets: SUN-C, CUB-C and AWA2-C. We analyse our results with respect to dataset characteristics, class imbalance, class-transition trends between seen and unseen classes, and the discrepancies between ZSL and generalized ZSL (GZSL) performance. Our results show that discriminative ZSL models suffer from corruptions, and that this trend is further exacerbated by the severe class imbalance and model weakness inherent in ZSL methods. We then combine our findings with those based on adversarial attacks in ZSL, and highlight the different effects of corruptions and adversarial examples, such as the pseudo-robustness effect present under adversarial attacks. We also obtain strong new baselines for the label embedding model using several corruption-robustness enhancement methods. Finally, our experiments show that although existing robustness-enhancement methods work to some extent for ZSL models, they do not produce a tangible improvement.

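To make the evaluation protocol sketched in the abstract concrete, the snippet below illustrates how a label-embedding ZSL model might be scored on corrupted versus clean inputs. This is a minimal sketch, not the authors' code: the bilinear compatibility scoring follows the standard label-embedding (ALE-style) formulation, Gaussian noise stands in for the ImageNet-C-style common corruptions used to build SUN-C, CUB-C and AWA2-C, and all shapes, names and values are illustrative assumptions.

```python
# Minimal sketch of corruption-robustness evaluation for a label-embedding
# ZSL model. The bilinear scoring x^T W a is the standard ALE-style setup;
# the Gaussian-noise "corruption" and all shapes are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(0)

n_images, feat_dim, attr_dim, n_unseen = 200, 2048, 85, 10

# Pretrained image features (e.g. from a CNN backbone) and per-class
# attribute vectors for the unseen classes.
img_feats = rng.normal(size=(n_images, feat_dim))
class_attrs = rng.normal(size=(n_unseen, attr_dim))
labels = rng.integers(0, n_unseen, size=n_images)

# Learned bilinear compatibility matrix W (random here, for illustration).
W = rng.normal(scale=0.01, size=(feat_dim, attr_dim))

def zsl_accuracy(feats):
    """Predict the unseen class whose attribute vector maximizes the
    bilinear compatibility score, and report top-1 accuracy."""
    scores = feats @ W @ class_attrs.T          # (n_images, n_unseen)
    preds = scores.argmax(axis=1)
    return (preds == labels).mean()

# Stand-in corruption: additive Gaussian noise at increasing severity,
# applied in feature space for simplicity (the paper corrupts the images
# themselves before feature extraction).
clean_acc = zsl_accuracy(img_feats)
for severity in range(1, 6):
    noisy = img_feats + rng.normal(scale=0.5 * severity, size=img_feats.shape)
    print(f"severity {severity}: accuracy drop {clean_acc - zsl_accuracy(noisy):.3f}")
```

In the paper's actual setting, the corruption is applied to the input image at one of several severity levels before feature extraction, and the drop in (G)ZSL accuracy relative to the clean baseline is the robustness measure.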