Strong NP-Hardness for Sparse Optimization with Concave Penalty Functions

Abstract

Consider the regularized sparse minimization problem, which involves empirical sums of loss functions for $n$ data points (each of dimension $d$) and a nonconvex sparsity penalty. We prove that finding an $\mathcal{O}(n^{c_1} d^{c_2})$-optimal solution to the regularized sparse optimization problem is strongly NP-hard for any $c_1, c_2 \in [0,1)$ such that $c_1 + c_2 < 1$. The result applies to a broad class of loss functions and sparse penalty functions. It suggests that one cannot even approximately solve the sparse optimization problem in polynomial time, unless P $=$ NP.
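As a concrete illustration of the problem class described above, the sketch below evaluates one instance of such an objective: an empirical sum of squared losses over $n$ data points plus a concave (log-type) sparsity penalty. The particular loss, penalty, and parameter names (`lam`, `theta`) are illustrative assumptions, not choices taken from the paper.

```python
import numpy as np

def sparse_objective(x, A, b, lam=0.1, theta=1.0):
    """One illustrative instance of the regularized sparse problem:
    empirical sum of squared losses plus a concave log-type penalty.
    The specific loss/penalty pair is an assumption for demonstration."""
    loss = 0.5 * np.sum((A @ x - b) ** 2)                     # empirical loss sum
    penalty = lam * np.sum(np.log(1.0 + np.abs(x) / theta))   # concave, sparsity-inducing
    return loss + penalty

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))   # n = 20 data points of dimension d = 5
b = rng.standard_normal(20)

x0 = np.zeros(5)                   # at x = 0 the penalty term vanishes
print(sparse_objective(x0, A, b))
```

The penalty is concave in $|x_j|$, which is exactly the structural feature the hardness result exploits: even approximating the global minimum of such objectives is intractable in the sense stated above.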
