On aggregation for heavy-tailed classes

Abstract

We introduce an alternative to the notion of `fast rate' in Learning Theory, which coincides with the optimal error rate when the given class happens to be convex and regular in some sense. While it is well known that such a rate cannot always be attained by a learning procedure (i.e., a procedure that selects a function in the given class), we introduce an aggregation procedure that attains that rate under rather minimal assumptions -- for example, that the $L_q$ and $L_2$ norms are equivalent on the linear span of the class for some $q>2$, and the target random variable is square-integrable.
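The norm-equivalence assumption mentioned above can be written out explicitly; the following is a sketch of the standard formulation (the constant $L$ and the class name $F$ are notational assumptions, not taken from the abstract):

```latex
% Norm equivalence on the linear span of the class F:
% there is a constant L such that, for some fixed q > 2,
%   \|f\|_{L_q} \le L \, \|f\|_{L_2}  for every f in span(F).
\exists\, L \ge 1,\ \exists\, q > 2 :\quad
\|f\|_{L_q} \;\le\; L\,\|f\|_{L_2}
\qquad \text{for all } f \in \operatorname{span}(F).
```

Conditions of this type bound the tail behaviour of functions in the class, which is why they suffice even for heavy-tailed problems where stronger (e.g., sub-Gaussian) assumptions fail.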
