
Distributionally Robust k-Nearest Neighbors for Few-Shot Learning

Abstract

Learning a robust classifier from a few samples remains a key challenge in machine learning. A major thrust of research in classification with few training samples has been based on metric learning: capturing similarities between samples and then performing the k-nearest neighbor algorithm. To make such an algorithm more robust, in this paper we propose a distributionally robust k-nearest neighbor algorithm, Dr.k-NN, which assigns minimax optimal weights to training samples when performing classification. We also couple it with neural-network-based feature embedding. We demonstrate the competitive performance of our algorithm compared to the state-of-the-art in the few-training-sample setting through various real-data experiments.
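To fix ideas, the following is a minimal sketch of k-nearest neighbor classification with per-sample weights. The weights here are supplied directly as an input; in Dr.k-NN they would instead come from the paper's minimax optimization, which is not reproduced here. All function and variable names are illustrative.

```python
import numpy as np

def weighted_knn_predict(X_train, y_train, weights, x_query, k=3):
    """Classify x_query by a weighted vote among its k nearest training samples.

    `weights` holds one nonnegative weight per training sample. This is a
    simplified illustration of sample-weighted k-NN, not the paper's
    distributionally robust weight optimization.
    """
    # Euclidean distance from the query to every training sample
    dists = np.linalg.norm(X_train - x_query, axis=1)
    # Indices of the k nearest training samples
    nearest = np.argsort(dists)[:k]
    # Accumulate the weighted vote for each class label
    votes = {}
    for i in nearest:
        votes[y_train[i]] = votes.get(y_train[i], 0.0) + weights[i]
    # Return the class with the largest total weight
    return max(votes, key=votes.get)
```

With uniform weights this reduces to ordinary majority-vote k-NN; non-uniform weights let individual training samples count for more or less in the vote.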
