Inference for determinantal point processes without spectral knowledge

Determinantal point processes (DPPs) are point process models that naturally encode diversity between the points of a given realization, through a positive definite kernel K. DPPs possess desirable properties, such as exact sampling or analyticity of their moments, but learning the parameters of K through likelihood-based inference is not straightforward. First, the kernel that appears in the likelihood is not K, but another kernel L related to K through an often intractable spectral decomposition. This issue is typically bypassed in machine learning by directly parametrizing the kernel L, at the price of some interpretability of the model parameters. We follow this approach here. Second, the likelihood has an intractable normalizing constant, which takes the form of a large determinant in the case of a DPP over a finite set of objects, and the form of a Fredholm determinant in the case of a DPP over a continuous domain. Our main contribution is to derive bounds on the likelihood of a DPP, both for finite and continuous domains. Unlike previous work, our bounds are cheap to evaluate since they do not rely on approximating the spectrum of a large matrix or an operator. Through standard arguments, these bounds then yield cheap variational inference and moderately expensive, exact Markov chain Monte Carlo inference methods for DPPs.
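
To fix notation, these are the standard DPP identities the abstract alludes to (stated here for context, not quoted from the paper): for a DPP over a finite ground set of N items, the likelihood of an observed subset A is a ratio of determinants, and the marginal kernel K is tied to L by a spectral map.

```latex
% Finite ground set of N items: likelihood of an observed subset A,
% and the map relating the marginal kernel K to the likelihood kernel L.
\[
  \mathbb{P}(X = A) \;=\; \frac{\det(L_A)}{\det(L + I)},
  \qquad
  K \;=\; L\,(L + I)^{-1},
\]
% where L_A is the submatrix of L indexed by A and det(L + I) is the
% normalizing constant. Inverting the map to recover L from K requires
% a spectral decomposition; over a continuous domain, det(L + I)
% becomes the Fredholm determinant \det(I + \mathcal{L}).
```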
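A minimal numerical sketch of the exact finite-domain likelihood (the helper name is mine, not the paper's) makes the cost concrete: the normalizing constant requires an N x N log-determinant, an O(N^3) computation, which is exactly what the bounds are designed to avoid.

```python
import numpy as np

def dpp_log_likelihood(L, A):
    """Exact log-likelihood of an observed subset A under a finite
    L-ensemble DPP: log det(L_A) - log det(L + I).

    Hypothetical helper for illustration; the N x N log-determinant
    of L + I is the O(N^3) bottleneck the paper's bounds sidestep.
    """
    _, logdet_A = np.linalg.slogdet(L[np.ix_(A, A)])
    _, logdet_Z = np.linalg.slogdet(L + np.eye(len(L)))
    return logdet_A - logdet_Z

# Toy usage: a random PSD kernel on N = 5 items, observing subset {0, 2}.
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))
L = X @ X.T + 1e-6 * np.eye(5)   # small jitter keeps L_A nonsingular
print(dpp_log_likelihood(L, [0, 2]))
```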
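For flavor, here is one standard Nystrom-type sandwich on log det(L + I) that avoids any spectral computation; it is a sketch in the spirit of the paper's spectral-free bounds, not a reproduction of them. With Lhat = C W^{-1} C^T built from m sampled columns, L - Lhat is positive semi-definite, so monotonicity and concavity of log det give log det(I + Lhat) <= log det(I + L) <= log det(I + Lhat) + tr(L - Lhat), and both ends cost O(N m^2) rather than O(N^3).

```python
import numpy as np

def nystrom_logdet_bounds(L, idx):
    """Sandwich log det(L + I) without an eigendecomposition.

    Sketch under the assumptions in the text (my construction, not the
    paper's exact bounds): with C = L[:, idx], W = L[idx, idx] and
    Lhat = C W^{-1} C^T, PSD-ness of L - Lhat gives
        log det(I + Lhat) <= log det(I + L)
                          <= log det(I + Lhat) + tr(L) - tr(Lhat).
    Cost is O(N m^2) for m = len(idx) sampled columns.
    """
    C = L[:, idx]                    # N x m sampled columns
    W = L[np.ix_(idx, idx)]          # m x m core block (assumed invertible)
    G = C.T @ C                      # m x m Gram matrix, the O(N m^2) step
    # Sylvester: det(I_N + C W^{-1} C^T) = det(W + C^T C) / det(W)
    _, logdet_num = np.linalg.slogdet(W + G)
    _, logdet_den = np.linalg.slogdet(W)
    lower = logdet_num - logdet_den
    # tr(Lhat) = tr(W^{-1} C^T C); tr(L) needs only the diagonal of L
    upper = lower + np.trace(L) - np.trace(np.linalg.solve(W, G))
    return lower, upper

# Toy usage: the sandwich brackets the exact value from slogdet.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 50))
L = X @ X.T
lo, up = nystrom_logdet_bounds(L, list(range(20)))
_, exact = np.linalg.slogdet(L + np.eye(200))
print(lo <= exact <= up, (lo, exact, up))
```

Plugging a lower bound on the normalizing constant into the log-likelihood yields an upper bound on the likelihood, and vice versa; pairs of bounds of this kind are what the variational and Markov chain Monte Carlo arguments mentioned above consume.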