ResearchTrend.AI

arXiv:1710.05520
Entanglement Entropy of Target Functions for Image Classification and Convolutional Neural Network

16 October 2017
Ya-Hui Zhang
Abstract

The success of deep convolutional neural networks (CNNs) in computer vision, especially in image classification, calls for a new information theory of functions of images, rather than of the images themselves. In this article, after establishing a deep mathematical connection between the image classification problem and quantum spin models, we propose entanglement entropy, a generalization of the classical Boltzmann-Shannon entropy, as a powerful tool to characterize the information needed to represent a general function of images. We prove a sub-volume-law bound on the entanglement entropy of the target functions of reasonable image classification problems. Such target functions therefore occupy only a small subspace of the full Hilbert space, so a neural network with a polynomial number of parameters is sufficient to represent them. Entanglement entropy can also be used to characterize the expressive power of different neural networks. For example, we show that to maintain the same expressive power, the number of channels $D$ in a convolutional neural network should scale with the number of convolution layers $n_c$ as $D \sim D_0^{1/n_c}$. A deeper CNN, with large $n_c$, is therefore more efficient than a shallow one.
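As a minimal illustration of the abstract's central quantity (a sketch, not code from the paper), the snippet below treats a real-valued function on binary "images" of $n$ pixels as a normalized state vector of length $2^n$, bipartitions the pixels into a left and right group, and computes the entanglement entropy of that cut from the singular values of the reshaped vector. The function name, the toy bipartition, and the product-function example are all assumptions made here for illustration.

```python
import numpy as np

def entanglement_entropy(f, n_left, n_total):
    """Entanglement entropy of a left/right pixel bipartition.

    f is a length-2**n_total vector: the value of the target function
    on every binary image. It is normalized like a wavefunction, then
    reshaped into a (2**n_left, 2**(n_total - n_left)) matrix whose
    squared singular values are the Schmidt coefficients.
    """
    f = np.asarray(f, dtype=float)
    f = f / np.linalg.norm(f)
    mat = f.reshape(2**n_left, 2**(n_total - n_left))
    s = np.linalg.svd(mat, compute_uv=False)
    p = s**2
    p = p[p > 1e-12]          # drop numerically-zero Schmidt weights
    return float(-np.sum(p * np.log(p)))

# A product function f(x_left, x_right) = g(x_left) * h(x_right)
# is rank-1 across the cut, so its entanglement entropy vanishes:
n = 6
g = np.random.rand(2**3)
h = np.random.rand(2**3)
product = np.outer(g, h).ravel()
print(entanglement_entropy(product, 3, n) < 1e-9)  # True
```

A maximally entangled vector across the same cut (e.g. a flattened $8 \times 8$ identity matrix) gives the volume-law maximum $3\log 2$; the paper's point is that realistic classification targets sit far below that bound.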
