Algorithms for $\ell^p$-based semi-supervised learning on graphs

We develop fast algorithms for solving the variational and game-theoretic $p$-Laplace equations on weighted graphs for $p>2$. The graph $p$-Laplacian for $p>2$ has been proposed recently as a replacement for the standard ($p=2$) graph Laplacian in semi-supervised learning problems with very few labels, where the minimizer of the graph Laplacian becomes degenerate. We present several efficient and scalable algorithms for both the variational and game-theoretic formulations, and we present numerical results on synthetic and real data that illustrate the effectiveness of the $p$-Laplacian formulation for semi-supervised learning with few labels. We also prove new discrete to continuum convergence results for $p$-Laplace problems on $k$-nearest neighbor ($k$-NN) graphs, which are more commonly used in practice than random geometric graphs. Our analysis shows that, on $k$-NN graphs, the $p$-Laplacian retains information about the data distribution as $p\to\infty$, and Lipschitz learning ($p=\infty$) is sensitive to the data distribution. This can be contrasted with random geometric graphs, where the $p$-Laplacian \emph{forgets} the data distribution as $p\to\infty$. Finally, we give a general framework for proving discrete to continuum convergence results in graph-based learning that requires only pointwise consistency and a type of monotonicity.
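To make the variational formulation concrete, the following is a minimal sketch (not the paper's algorithms) of graph $p$-Laplace learning: minimize $J(u) = \frac{1}{p}\sum_{i,j} w_{ij}|u_i - u_j|^p$ by gradient descent on the unlabeled nodes while the few labeled values stay fixed. The function name, step size, and iteration count are illustrative choices, and plain gradient descent is used only for clarity; it is far slower than the scalable solvers the paper is about.

```python
import numpy as np

def p_laplace_learning(W, labels, p=4.0, lr=0.05, iters=20000):
    """Sketch of variational graph p-Laplace learning.

    Minimizes J(u) = (1/p) * sum_{i,j} w_ij |u_i - u_j|^p by gradient
    descent, holding the labeled values fixed (a Dirichlet condition).

    W      : symmetric (n, n) nonnegative weight matrix
    labels : dict {node index: label value} for the few labeled nodes
    """
    n = W.shape[0]
    u = np.zeros(n)
    for i, v in labels.items():
        u[i] = v
    free = np.array([i for i in range(n) if i not in labels])
    for _ in range(iters):
        d = u[:, None] - u[None, :]                # all pairwise differences
        grad = (W * np.abs(d) ** (p - 2) * d).sum(axis=1)
        u[free] -= lr * grad[free]                 # update unlabeled nodes only
    return u

# Path graph on 5 nodes with unit weights, labeled endpoints 0 and 1.
# On a path the p-harmonic minimizer is the linear interpolant for any p > 1.
W = np.zeros((5, 5))
for i in range(4):
    W[i, i + 1] = W[i + 1, i] = 1.0
u = p_laplace_learning(W, {0: 0.0, 4: 1.0})
print(np.round(u, 3))  # ≈ [0, 0.25, 0.5, 0.75, 1]
```

For $p=2$ this reduces to standard Laplacian regularization; taking $p$ large penalizes the largest gradients most heavily, which is what prevents the degenerate (nearly constant with spikes at labels) minimizers that motivate the $p>2$ formulation.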