Extending Universal Approximation Guarantees: A Theoretical Justification for the Continuity of Real-World Learning Tasks

Universal Approximation Theorems establish the density of various classes of neural network function approximators in $C(K)$, where $K$ is compact. In this paper, we extend these guarantees by establishing conditions on learning tasks that guarantee their continuity. We consider learning tasks given by conditional expectations $x \mapsto \mathrm{E}[Y \mid X = x]$, where the learning target $Y = f(X)$ is a potentially pathological transformation of some underlying data-generating process $X$. Under a factorization $X = g(Z)$ of the data-generating process, where $g$ is thought of as a deterministic map acting on some random input $Z$, we establish conditions (which may be easily verified using knowledge of $g$ alone) that guarantee the continuity of practically \textit{any} derived learning task $x \mapsto \mathrm{E}[f(X) \mid X = x]$. We motivate the realism of our conditions using the example of randomized stable matching, thus providing a theoretical justification for the continuity of real-world learning tasks.
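For context on the guarantees being extended, a classical universal approximation statement (in the style of Cybenko and Hornik) can be sketched as follows; the notation here is illustrative and not taken from the paper itself:

```latex
% Classical one-hidden-layer universal approximation (density) statement.
% Notation is illustrative, not the paper's own.
% Let K \subset \mathbb{R}^n be compact and let \sigma be a continuous,
% non-polynomial activation function. Consider the class of networks
\[
  \mathcal{N}_\sigma \;=\; \Big\{\, x \mapsto \sum_{i=1}^{m} a_i \,
      \sigma\!\left(w_i^{\top} x + b_i\right) \;:\;
      m \in \mathbb{N},\; a_i, b_i \in \mathbb{R},\; w_i \in \mathbb{R}^n \,\Big\}.
\]
% Then \mathcal{N}_\sigma is dense in C(K) under the uniform norm:
% for every f \in C(K) and \varepsilon > 0 there exists N \in \mathcal{N}_\sigma with
\[
  \sup_{x \in K} \, \lvert f(x) - N(x) \rvert \;<\; \varepsilon .
\]
```

Such density results apply only to continuous targets on compact domains, which is why establishing the continuity of the learning task $x \mapsto \mathrm{E}[Y \mid X = x]$ is the step needed before these guarantees can be invoked.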