Optimal Algorithms for Ridge and Lasso Regression with Partially Observed Attributes

Abstract

We consider the problems of ridge (L2-regularized) and lasso (L1-regularized) linear regression in a partial-information setting, in which the learner may observe only a fixed number of attributes of each example at training time. We present simple and efficient algorithms for both problems that are optimal up to logarithmic factors, in the sense that they require observing the same number of attributes as full-information algorithms do. This answers an open problem recently posed by Cesa-Bianchi et al. (2010) and shows their lower bound to be tight.
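To make the partial-observation setting concrete, the following sketch illustrates one standard way to learn from examples whose attributes are only partially observed: sample k of the d coordinates uniformly at random and rescale by d/k to obtain an unbiased estimate of the example, then use two independent such samples to build an unbiased stochastic gradient for the ridge objective. This is an illustrative construction under stated assumptions, not the paper's actual algorithm; all function names here are hypothetical.

```python
import numpy as np

def sample_attributes(x, k, rng):
    """Observe only k of the d attributes of x, chosen uniformly at random.
    Rescaling the observed coordinates by d/k makes the estimate unbiased:
    E[x_hat] = x."""
    d = x.shape[0]
    idx = rng.choice(d, size=k, replace=False)
    x_hat = np.zeros(d)
    x_hat[idx] = x[idx] * (d / k)
    return x_hat

def ridge_sgd_partial(X, y, k, lam=0.1, eta=0.01, epochs=50, seed=0):
    """SGD for ridge regression observing at most 2k attributes per example.
    Two *independent* attribute samples are used per step: a single sample
    plugged into (x_hat @ w - y) * x_hat would bias the x x^T term, whereas
    E[(x1_hat @ w) * x2_hat] = (x @ w) * x when the samples are independent.
    Illustrative sketch only -- not the algorithm from the paper."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        for i in rng.permutation(n):
            x1 = sample_attributes(X[i], k, rng)
            x2 = sample_attributes(X[i], k, rng)
            grad = (x1 @ w - y[i]) * x2 + lam * w  # unbiased gradient estimate
            w -= eta * grad
    return w
```

The rescaled-sampling estimator trades attribute observations for variance: fewer observed coordinates per example means noisier gradients, and the question the paper addresses is how many attributes in total are needed to match the accuracy of a full-information learner.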
