
Feasibility with Language Models for Open-World Compositional Zero-Shot Learning

Main: 4 pages, 6 figures, 6 tables; Bibliography: 2 pages; Appendix: 5 pages
Abstract

Humans can easily tell whether an attribute (also called a state) is realistic, i.e., feasible, for an object: fire can be hot, but it cannot be wet. In Open-World Compositional Zero-Shot Learning (OW-CZSL), where all possible state-object combinations are considered as unseen classes, zero-shot predictors tend to perform poorly. Our work focuses on using external auxiliary knowledge to determine the feasibility of state-object combinations. Our Feasibility with Language Models (FLM) is a simple and effective approach that leverages Large Language Models (LLMs) to better comprehend the semantic relationships between states and objects. FLM queries an LLM about the feasibility of a given state-object pair and retrieves the output logit for the positive answer. Since many state-object compositions are rare or entirely infeasible and could mislead the LLM, we observe that its in-context learning ability is essential. We present an extensive study identifying Vicuna and ChatGPT as the best-performing models, and we demonstrate that FLM consistently improves OW-CZSL performance across all three benchmarks.
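A minimal sketch of the feasibility-scoring idea described in the abstract, assuming a HuggingFace causal LM (Vicuna, as mentioned above). The prompt wording, in-context examples, and checkpoint name are illustrative assumptions, not the authors' exact setup: the key step is reading the next-token logit for the positive answer rather than sampling text.

```python
# Hedged sketch: score feasibility of a state-object pair via an LLM's "yes" logit.
# Prompt format and model checkpoint are assumptions for illustration only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "lmsys/vicuna-7b-v1.5"  # assumed checkpoint; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME, torch_dtype=torch.float16, device_map="auto"
)

# In-context examples steer the LLM toward the intended notion of feasibility.
ICL_EXAMPLES = (
    "Question: Can fire be hot? Answer: yes\n"
    "Question: Can fire be wet? Answer: no\n"
)

def feasibility_score(state: str, obj: str) -> float:
    """Return the model's next-token logit for answering 'yes'."""
    prompt = ICL_EXAMPLES + f"Question: Can {obj} be {state}? Answer:"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]  # logits over the next token
    yes_id = tokenizer.encode(" yes", add_special_tokens=False)[0]
    return logits[yes_id].item()

# Compositions whose score falls below a calibrated threshold could be pruned
# from the open-world label space before applying a zero-shot predictor.
print(feasibility_score("hot", "fire"), feasibility_score("wet", "fire"))
```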

@article{kim2025_2505.11181,
  title={Feasibility with Language Models for Open-World Compositional Zero-Shot Learning},
  author={Jae Myung Kim and Stephan Alaniz and Cordelia Schmid and Zeynep Akata},
  journal={arXiv preprint arXiv:2505.11181},
  year={2025}
}