
Extending Universal Approximation Guarantees: A Theoretical Justification for the Continuity of Real-World Learning Tasks

Abstract

Universal Approximation Theorems establish the density of various classes of neural network function approximators in $C(K, \mathbb{R}^m)$, where $K \subset \mathbb{R}^n$ is compact. In this paper, we extend these guarantees by establishing conditions on learning tasks that ensure their continuity. We consider learning tasks given by conditional expectations $x \mapsto \mathrm{E}\left[Y \mid X = x\right]$, where the learning target $Y = f \circ L$ is a potentially pathological transformation of some underlying data-generating process $L$. Given a factorization $L = T \circ W$ of the data-generating process, where $T$ is viewed as a deterministic map acting on some random input $W$, we establish conditions (which may be verified easily using knowledge of $T$ alone) that guarantee the continuity of practically \textit{any} derived learning task $x \mapsto \mathrm{E}\left[f \circ L \mid X = x\right]$. We motivate the realism of our conditions using the example of randomized stable matching, thus providing a theoretical justification for the continuity of real-world learning tasks.
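
To make the abstract's setup concrete, here is a minimal illustrative sketch (not code from the paper) of the factorization $L = T \circ W$ instantiated with randomized stable matching: $T$ is the deterministic Gale-Shapley map acting on a random preference profile $W$ whose law depends on a feature $x$, and the learning task $x \mapsto \mathrm{E}\left[f \circ L \mid X = x\right]$ is estimated by Monte Carlo. All names (`gale_shapley`, `sample_preferences`, `learning_task`, the specific choice of $f$) are hypothetical and chosen for exposition only.

```python
# Illustrative sketch of the abstract's factorization L = T o W (hypothetical example,
# not taken from the paper): T = Gale-Shapley, W = random preferences depending on x.
import random
from statistics import mean

def gale_shapley(men_prefs, women_prefs):
    """Deterministic map T: a preference profile -> its man-optimal stable matching."""
    n = len(men_prefs)
    free_men = list(range(n))
    next_choice = [0] * n                      # next woman each free man proposes to
    fiance = {}                                # woman -> currently engaged man
    rank = [{m: r for r, m in enumerate(women_prefs[w])} for w in range(n)]
    while free_men:
        m = free_men.pop()
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        if w not in fiance:
            fiance[w] = m
        elif rank[w][m] < rank[w][fiance[w]]:  # woman w prefers the new proposer
            free_men.append(fiance[w])
            fiance[w] = m
        else:
            free_men.append(m)
    return {m: w for w, m in fiance.items()}   # matching as man -> woman

def sample_preferences(n, x, rng):
    """Random input W: preference profiles drawn from a law that depends on x."""
    men = [rng.sample(range(n), n) for _ in range(n)]
    women = [rng.sample(range(n), n) for _ in range(n)]
    if rng.random() < x:                       # with probability x, man 0 ranks woman 0 first
        men[0] = [0] + [w for w in men[0] if w != 0]
    return men, women

def learning_task(x, n=5, num_samples=2000, seed=0):
    """Monte Carlo estimate of x -> E[f(T(W)) | X = x], with f = 1{man 0 matched to woman 0}."""
    rng = random.Random(seed)
    outcomes = []
    for _ in range(num_samples):
        men, women = sample_preferences(n, x, rng)
        matching = gale_shapley(men, women)    # L = T(W): a discrete, "pathological" object
        outcomes.append(1.0 if matching[0] == 0 else 0.0)
    return mean(outcomes)

# Although f o L is a discontinuous (indicator) function of the matching, the derived
# conditional expectation varies smoothly in x in this toy example:
print([round(learning_task(x), 3) for x in (0.0, 0.5, 1.0)])
```

The point of the sketch is only that the derived task can be well behaved in $x$ even when $f \circ L$ itself is discontinuous; the paper's contribution is to give conditions on $T$ under which such continuity is guaranteed.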
