Complexity of Finding Stationary Points of Nonsmooth Nonconvex Functions

We provide the first non-asymptotic analysis for finding stationary points of nonsmooth, nonconvex functions. In particular, we study the class of Hadamard semi-differentiable functions, perhaps the largest class of nonsmooth functions for which the chain rule of calculus holds. This class contains examples such as ReLU neural networks and others with non-differentiable activation functions. We first show that finding an $\epsilon$-stationary point with first-order methods is impossible in finite time. We then introduce the notion of $(\delta, \epsilon)$-stationarity, which allows for an $\epsilon$-approximate gradient to be the convex combination of generalized gradients evaluated at points within distance $\delta$ of the solution. We propose a series of randomized first-order methods and analyze their complexity of finding a $(\delta, \epsilon)$-stationary point. Furthermore, we provide a lower bound and show that our stochastic algorithm has min-max optimal dependence on $\delta$. Empirically, our methods perform well for training ReLU neural networks.
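
In symbols, the stationarity notion described above can be read as follows (a sketch based directly on the sentence defining it; the notation $\partial f(y)$ for the set of generalized gradients at $y$ and $B_\delta(x)$ for the ball of radius $\delta$ around $x$ is ours): a point $x$ is $(\delta, \epsilon)$-stationary for $f$ when

$$
\min_{g \,\in\, \operatorname{conv}\left( \bigcup_{y \in B_\delta(x)} \partial f(y) \right)} \|g\| \;\le\; \epsilon,
$$

i.e., some convex combination of generalized gradients taken at points within distance $\delta$ of $x$ has norm at most $\epsilon$. Setting $\delta = 0$ recovers the usual $\epsilon$-stationarity condition, which the paper shows is unattainable in finite time for this function class.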