Information-Theoretic Bounds and Approximations in Neural Population Coding

For practical applications of information theory, it is often difficult to compute Shannon's mutual information accurately for high-dimensional variables because of the curse of dimensionality. This paper focuses on effective approximation methods for evaluating mutual information in the context of neural population coding. We derive several information-theoretic asymptotic bounds and approximation formulas for large but finite neural populations that remain valid in high-dimensional spaces. We prove that optimizing the population density distribution based on these approximation formulas is a convex optimization problem, which allows efficient numerical solutions. We also discuss variable transformation and dimensionality-reduction techniques for computing the approximations. Numerical simulations confirm that our asymptotic formulas are highly accurate for approximating the mutual information of large neural populations; in special cases, the approximation formulas are exactly equal to the true mutual information. Since mutual information has widespread applications across many disciplines, the general asymptotic formulas established here may also apply to other related problems.
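For a concrete sense of what such an asymptotic approximation looks like, the minimal sketch below uses the classical Fisher-information form, I(X;R) ≈ H(X) + (1/2) E_X[log det(J(X)/(2πe))], on a toy Gaussian population of N neurons with responses r_i = x + σ·ε_i. This form is an illustrative stand-in assumed here for exposition, not necessarily the exact bounds derived in the paper; for this toy model both the exact mutual information and the approximation have closed forms, so their convergence as N grows is easy to check.

```python
import numpy as np

def exact_mi(n_neurons, sx=1.0, sigma=1.0):
    # Toy model: x ~ N(0, sx^2), r_i = x + sigma*eps_i (i.i.d. Gaussian noise).
    # The sample mean is a sufficient statistic, so this is a Gaussian channel:
    # I(X; R) = 0.5 * log(1 + N * sx^2 / sigma^2)  (in nats)
    return 0.5 * np.log(1.0 + n_neurons * sx**2 / sigma**2)

def fisher_approx_mi(n_neurons, sx=1.0, sigma=1.0):
    # Fisher-information asymptotic approximation (assumed illustrative form):
    # I(X; R) ~ H(X) + 0.5 * log(J / (2*pi*e)),
    # with H(X) = 0.5 * log(2*pi*e*sx^2) and J = N / sigma^2 for this model.
    h_x = 0.5 * np.log(2.0 * np.pi * np.e * sx**2)
    return h_x + 0.5 * np.log(n_neurons / (sigma**2 * 2.0 * np.pi * np.e))

# The gap between the exact value and the approximation shrinks as N grows.
for n in (10, 100, 1_000, 10_000):
    print(f"N={n:6d}  exact={exact_mi(n):.4f}  approx={fisher_approx_mi(n):.4f}")
```

In this toy case the exact value 0.5·log(1 + N) and the approximation 0.5·log(N) differ only by a term that vanishes for large N, illustrating the kind of large-population regime in which asymptotic formulas of this type become accurate.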