Interpretable Few-Shot Image Classification via Prototypical Concept-Guided Mixture of LoRA Experts

Abstract

Self-Explainable Models (SEMs) rely on Prototypical Concept Learning (PCL) to make their visual recognition processes more interpretable, but they often struggle in data-scarce settings where insufficient training samples lead to suboptimal performance. To address this limitation, we propose a Few-Shot Prototypical Concept Classification (FSPCC) framework that systematically mitigates two key challenges under low-data regimes: parametric imbalance and representation misalignment. Specifically, our approach leverages a Mixture of LoRA Experts (MoLE) for parameter-efficient adaptation, ensuring a balanced allocation of trainable parameters between the backbone and the PCL module. Meanwhile, cross-module concept guidance enforces tight alignment between the backbone's feature representations and the prototypical concept activations. In addition, we incorporate a multi-level feature preservation strategy that fuses spatial and semantic cues across various layers, thereby enriching the learned representations and mitigating the challenges posed by limited data. Finally, to enhance interpretability and minimize concept overlap, we introduce a geometry-aware concept discrimination loss that enforces orthogonality among concepts, encouraging more disentangled and transparent decision making. Experimental results on six popular benchmarks (CUB-200-2011, mini-ImageNet, CIFAR-FS, Stanford Cars, FGVC-Aircraft, and DTD) demonstrate that our approach consistently outperforms existing SEMs by a notable margin, with 4.2%-8.7% relative gains in 5-way 5-shot tasks. Our findings highlight the efficacy of coupling concept learning with few-shot adaptation to achieve both higher accuracy and clearer model interpretability, paving the way for more transparent visual recognition systems.
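
As a rough illustration of the parameter-efficient adaptation the abstract describes, the sketch below wraps a frozen linear layer with a gated mixture of LoRA experts in PyTorch. The class name MoLELinear, the per-sample softmax gate, and the expert count, rank, and scaling are illustrative assumptions; the paper's actual MoLE design (routing scheme, expert placement) may differ.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MoLELinear(nn.Module):
    """Frozen linear layer adapted by a gated mixture of LoRA experts (hypothetical sketch)."""
    def __init__(self, base: nn.Linear, num_experts: int = 4, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # backbone weights stay frozen
        d_in, d_out = base.in_features, base.out_features
        # Per-expert low-rank factors: A projects down, B projects back up (B starts at zero).
        self.A = nn.Parameter(torch.randn(num_experts, d_in, rank) * 0.01)
        self.B = nn.Parameter(torch.zeros(num_experts, rank, d_out))
        self.gate = nn.Linear(d_in, num_experts)  # per-sample gating over experts
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, d_in)
        w = F.softmax(self.gate(x), dim=-1)                   # (batch, K) expert weights
        h = torch.einsum('bi,kir->bkr', x, self.A)            # (batch, K, rank)
        delta = torch.einsum('bkr,kro->bko', h, self.B)       # (batch, K, d_out)
        delta = (w.unsqueeze(-1) * delta).sum(dim=1)          # gate-weighted expert mix
        return self.base(x) + self.scale * delta

In a ViT-style backbone, such a layer would typically replace projections inside attention blocks, leaving only the LoRA factors and the gate trainable, which is what keeps the parameter budget balanced against the PCL module.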

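The geometry-aware concept discrimination loss is likewise described only at a high level; a minimal soft-orthogonality penalty over L2-normalized concept prototypes, assumed here as one plausible form rather than the paper's exact objective, could look like this:

import torch
import torch.nn.functional as F

def concept_discrimination_loss(prototypes: torch.Tensor) -> torch.Tensor:
    """Penalize pairwise overlap between concept prototypes (assumed form).

    prototypes: (C, d) matrix, one row per concept.
    """
    p = F.normalize(prototypes, dim=-1)               # unit-norm concept vectors
    gram = p @ p.t()                                  # (C, C) cosine similarities
    eye = torch.eye(gram.size(0), device=gram.device)
    return ((gram - eye) ** 2).mean()                 # zero iff concepts are mutually orthogonal

Such a term would be added to the classification objective with a small weight, pushing concepts apart so that each activation carries a distinct, more interpretable meaning.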
@article{ji2025_2506.04673,
  title={Interpretable Few-Shot Image Classification via Prototypical Concept-Guided Mixture of LoRA Experts},
  author={Zhong Ji and Rongshuai Wei and Jingren Liu and Yanwei Pang and Jungong Han},
  journal={arXiv preprint arXiv:2506.04673},
  year={2025}
}