Information-Theoretic Bounds and Approximations in Neural Population Coding

Abstract

Information theory is a powerful tool for neuroscience and other disciplines. Efficient calculation of Shannon's mutual information (MI) is a key computational step that is often the main bottleneck in practical applications. In this paper, we propose effective approximation methods for evaluating MI in the context of neural population coding, especially for high-dimensional inputs. We prove several information-theoretic asymptotic bounds and approximation formulas for large neural populations. We also show how optimizing neural population coding based on these approximation formulas leads to a convex optimization problem for the population density distribution in the neural population parameter space. Several useful techniques, including variable transformation and dimensionality reduction, are proposed to compute the approximations more efficiently. Our numerical simulations show that the asymptotic formulas approximate MI in neural populations with high accuracy, and in some special cases the approximations are exactly equal to the true MI. These approximation methods for MI may find a wide range of applications in various disciplines.
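
To make the flavor of such asymptotic approximations concrete, the sketch below numerically evaluates the classical Fisher-information-based approximation I(X;R) ≈ H(X) + (1/2) E_x[log(J(x)/(2πe))] for a one-dimensional stimulus encoded by a large population of independent Poisson neurons. This is a minimal illustration of the general idea, not the paper's specific derivations: the Gaussian tuning curves, the standard-normal stimulus prior, and all parameter values are hypothetical choices made for the example.

```python
import numpy as np

# Minimal sketch (illustrative, not the paper's exact formulas): evaluate the
# classical Fisher-information asymptotic approximation of mutual information,
#   I(X; R) ~= H(X) + 0.5 * E_x[ log( J(x) / (2*pi*e) ) ],
# for a 1-D stimulus X encoded by N independent Poisson neurons.

rng = np.random.default_rng(0)

N = 200                                # population size (large-N regime)
centers = np.linspace(-3.0, 3.0, N)    # preferred stimuli (hypothetical)
sigma = 0.5                            # tuning width (hypothetical)
r_max = 20.0                           # peak spike count per coding window

def rates(x):
    """Mean spike counts under Gaussian tuning curves at stimulus x."""
    return r_max * np.exp(-0.5 * ((x - centers) / sigma) ** 2)

def fisher_info(x):
    """Fisher information of independent Poisson neurons: sum_i f_i'^2 / f_i."""
    f = rates(x)
    df = -f * (x - centers) / sigma**2  # derivative of each tuning curve
    return np.sum(df ** 2 / np.maximum(f, 1e-12))

# Standard-normal stimulus prior, so H(X) = 0.5 * log(2*pi*e) nats.
xs = rng.standard_normal(10_000)
h_x = 0.5 * np.log(2 * np.pi * np.e)

# Monte Carlo average of the asymptotic correction term.
log_j = np.array([np.log(fisher_info(x)) for x in xs])
mi_approx = h_x + 0.5 * np.mean(log_j - np.log(2 * np.pi * np.e))

print(f"Asymptotic MI approximation: {mi_approx:.3f} nats")
```

Because the correction term depends on the stimulus only through the Fisher information, the same Monte Carlo loop extends to higher-dimensional stimuli by replacing log J(x) with the log-determinant of the Fisher information matrix.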
