ResearchTrend.AI

arXiv:1703.01203
Stochastic Separation Theorems

3 March 2017
A. N. Gorban
I. Tyukin
Abstract

The problem of non-iterative, one-shot, and non-destructive correction of unavoidable mistakes arises in all Artificial Intelligence applications in the real world. Its solution requires robust separation of samples with errors from samples where the system works properly. We demonstrate that in (moderately) high dimension this separation can be achieved with probability close to one by linear discriminants. Surprisingly, separation of a new image from a very large set of known images is almost always possible even in moderately high dimensions by linear functionals, and the coefficients of these functionals can be found explicitly. Based on fundamental properties of measure concentration, we show that for $M < a\exp(bn)$, random $M$-element sets in $\mathbb{R}^n$ are linearly separable with probability $p > 1 - \vartheta$, where $1 > \vartheta > 0$ is a given small constant. Exact values of $a, b > 0$ depend on the probability distribution that determines how the random $M$-element sets are drawn, and on the constant $\vartheta$. These stochastic separation theorems provide a new instrument for the development, analysis, and assessment of machine learning methods and algorithms in high dimension. Theoretical statements are illustrated with numerical examples.
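The abstract's claim can be checked empirically with a minimal sketch (not the paper's own experiment; the scaling, distribution, and threshold below are illustrative assumptions): draw a random $M$-point cloud in $\mathbb{R}^n$, draw a fresh point $x$, and test whether the simple linear functional $\ell(y) = \langle x, y\rangle$ with threshold $(1-\varepsilon)\langle x, x\rangle$ separates $x$ from every point of the cloud. As the dimension grows, the fraction of trials in which this succeeds should approach one.

```python
import numpy as np

rng = np.random.default_rng(0)

def separable_fraction(n, M, trials=200, eps=0.1):
    """Fraction of trials in which a freshly drawn point x satisfies
    <x, y> < (1 - eps) * <x, x> for every y in a random M-point set,
    i.e. the explicit linear functional l(y) = <x, y> separates x
    from the whole cloud. Points are i.i.d. Gaussian, scaled so that
    squared norms concentrate near 1 (an illustrative choice)."""
    hits = 0
    for _ in range(trials):
        Y = rng.standard_normal((M, n)) / np.sqrt(n)  # random M-element set
        x = rng.standard_normal(n) / np.sqrt(n)       # new point to separate
        if np.all(Y @ x < (1.0 - eps) * (x @ x)):
            hits += 1
    return hits / trials

# Separation by a linear functional becomes near-certain as n grows,
# even for large M, matching the M < a*exp(b*n) regime of the theorems.
for n in (10, 50, 200):
    print(f"n={n:4d}  separable fraction: {separable_fraction(n, M=1000):.2f}")
```

By concentration of measure, $\langle x, y\rangle$ has standard deviation of order $1/\sqrt{n}$ while $\langle x, x\rangle$ concentrates near 1, so the probability that any one of the $M$ points violates the threshold decays exponentially in $n$, which is exactly why $M$ may grow like $a\exp(bn)$ while separation still holds with probability close to one.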

View on arXiv