
Statistical inference on errorfully observed graphs

Abstract

Statistical inference on graphs is a burgeoning field in the applied and theoretical statistics communities, as well as throughout the wider world of science, engineering, business, etc. In many applications, we are faced with the reality of errorfully observed graphs: the existence of an edge between two vertices is based on some imperfect assessment. In this paper, we consider a graph $G = (V,E)$. We wish to perform an inference task; the task considered here is vertex classification, i.e., given a vertex $v$ with unknown label $Y(v)$, we want to infer the label for $v$ based on the graph $G$ and the given labels for some set of vertices in $G$ not containing $v$. However, we do not observe $G$; rather, for each potential edge $uv \in {{V}\choose{2}}$ we observe an "edge-feature" which we use to classify $uv$ as edge/not-edge. Thus we {\it errorfully} observe $G$ when we observe the graph $\widetilde{G} = (V,\widetilde{E})$, as the edges in $\widetilde{E}$ arise from the classifications of the edge-features and are expected to be errorful. Moreover, we face a quantity/quality trade-off regarding the edge-features we observe: more informative edge-features are more expensive, and hence the number of potential edges that can be assessed decreases with the quality of the edge-features. We study this problem by formulating a quantity/quality trade-off for a simple class of random graph models, namely the stochastic blockmodel. We then consider a simple but optimal classifier for the label of $v$ and derive the optimal quantity/quality operating point for subsequent graph inference in the face of this trade-off. The results are surprising and suggest that the implications of the quantity/quality trade-off are interesting and non-trivial.
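To make the setting concrete, the following is a minimal simulation sketch of errorful observation of a stochastic blockmodel and a plug-in vertex classifier. All parameters (block connectivity matrix `B`, edge-misclassification rate `eps`, block sizes) and the likelihood-based classification rule are illustrative assumptions for this sketch, not the paper's exact model, quantity/quality budget, or Bayes-optimal classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Illustrative parameters (assumptions, not taken from the paper) ---
n_per_block = 100                      # vertices per block
B = np.array([[0.5, 0.3],              # SBM block connectivity matrix
              [0.3, 0.5]])
eps = 0.1                              # edge-feature misclassification rate

# Block labels: first block 0, then block 1
labels = np.repeat([0, 1], n_per_block)
n = labels.size

# Sample the true SBM adjacency matrix for G
P = B[labels][:, labels]
A = (rng.random((n, n)) < P).astype(int)
A = np.triu(A, 1)
A = A + A.T                            # symmetric, no self-loops

# Errorful observation: each potential edge is misclassified w.p. eps
flips = (rng.random((n, n)) < eps).astype(int)
flips = np.triu(flips, 1)
flips = flips + flips.T
A_tilde = A ^ flips                    # observed graph G~

# Classify a held-out vertex v from its observed connectivity to labeled
# vertices in each block (a simple plug-in likelihood rule).
v = 0                                  # pretend v's label is unknown
train = np.arange(1, n)                # all other vertices are labeled

# Edge probability in G~ after noise: (1 - eps) * B + eps * (1 - B)
B_obs = (1 - eps) * B + eps * (1 - B)

def log_likelihood(block):
    """Log-likelihood of v's observed edges if v belonged to `block`."""
    p = B_obs[block, labels[train]]
    x = A_tilde[v, train]
    return np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))

pred = max([0, 1], key=log_likelihood)
print(f"true label: {labels[v]}, predicted: {pred}")
```

In this toy setup, the quantity/quality trade-off the abstract describes would correspond to jointly varying `eps` and the number of potential edges whose features are assessed; here, for simplicity, all pairs are observed at a single error rate.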
