
Algorithms for $\ell_p$-based semi-supervised learning on graphs

Abstract

We develop fast algorithms for solving the variational and game-theoretic $p$-Laplace equations on weighted graphs for $p>2$. The graph $p$-Laplacian for $p>2$ has been proposed recently as a replacement for the standard ($p=2$) graph Laplacian in semi-supervised learning problems with very few labels, where the minimizer of the graph Laplacian becomes degenerate. We present several efficient and scalable algorithms for both the variational and game-theoretic formulations, and present numerical results on synthetic data and real data that illustrate the effectiveness of the $p$-Laplacian formulation for semi-supervised learning with few labels. We also prove new discrete to continuum convergence results for $p$-Laplace problems on $k$-nearest neighbor ($k$-NN) graphs, which are more commonly used in practice than random geometric graphs. Our analysis shows that, on $k$-NN graphs, the $p$-Laplacian retains information about the data distribution as $p\to\infty$, and Lipschitz learning ($p=\infty$) is sensitive to the data distribution. This situation can be contrasted with random geometric graphs, where the $p$-Laplacian \emph{forgets} the data distribution as $p\to\infty$. Finally, we give a general framework for proving discrete to continuum convergence results in graph-based learning that only requires pointwise consistency and a type of monotonicity.
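
For context on the variational formulation mentioned in the abstract: the variational problem minimizes the graph $p$-Dirichlet energy $J_p(u) = \frac{1}{p}\sum_{i,j} w_{ij}\,|u(x_i)-u(x_j)|^p$ over functions $u$ constrained to match the given labels on the labeled set, where $w_{ij}$ are the edge weights. The sketch below is a naive projected gradient descent on this energy, included only to illustrate the objective; it is not one of the paper's algorithms, and the function name and parameters are our own.

import numpy as np

def p_laplace_gradient_descent(W, labels, p=4.0, step=1e-2, iters=5000):
    # Hypothetical illustration (not the paper's method): projected gradient
    # descent on J_p(u) = (1/p) * sum_{i,j} W[i,j] * |u_i - u_j|**p, with the
    # labeled entries of u re-imposed after every step.
    # W: symmetric (n, n) weight matrix; labels: dict {node index: label value}.
    n = W.shape[0]
    u = np.zeros(n)
    idx = np.fromiter(labels.keys(), dtype=int)
    vals = np.fromiter(labels.values(), dtype=float)
    u[idx] = vals
    for _ in range(iters):
        D = u[:, None] - u[None, :]                  # D[i, j] = u_i - u_j
        grad = (W * np.abs(D) ** (p - 2) * D).sum(axis=1)
        u -= step * grad
        u[idx] = vals                                # project back onto the labels
    return u

With a dense weight matrix each iteration costs O(n^2); on a sparse $k$-NN graph one would sum only over edges. In a multi-class setting one typically solves one such problem per class (one-vs-rest) and labels each unlabeled point by the largest value.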
