Complexity of Finding Stationary Points of Nonsmooth Nonconvex Functions

Abstract

We provide the first non-asymptotic analysis for finding stationary points of nonsmooth, nonconvex functions. In particular, we study the class of Hadamard semi-differentiable functions, perhaps the largest class of nonsmooth functions for which the chain rule of calculus holds. This class contains examples such as ReLU neural networks and others with non-differentiable activation functions. We first show that finding an ϵ-stationary point with first-order methods is impossible in finite time. We then introduce the notion of (δ, ϵ)-stationarity, which allows for an ϵ-approximate gradient to be a convex combination of generalized gradients evaluated at points within distance δ of the solution. We propose a series of randomized first-order methods and analyze their complexity of finding a (δ, ϵ)-stationary point. Furthermore, we provide a lower bound and show that our stochastic algorithm has min-max optimal dependence on δ. Empirically, our methods perform well for training ReLU neural networks.
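Under our reading, the notion of (δ, ϵ)-stationarity described above can be formalized via the Goldstein δ-subdifferential: the convex hull of generalized gradients taken over a δ-ball around the point. A sketch of that definition (the notation below is ours, not fixed by the abstract):

\[
\partial_\delta f(x) \;=\; \operatorname{conv}\!\Bigl(\,\bigcup_{y \in B_\delta(x)} \partial f(y)\Bigr),
\qquad
x \text{ is } (\delta, \epsilon)\text{-stationary} \;\iff\; \min_{g \in \partial_\delta f(x)} \lVert g \rVert \le \epsilon,
\]

where ∂f(y) denotes the (Clarke) generalized gradient and B_δ(x) the closed δ-ball around x; as δ → 0 this recovers ordinary ϵ-stationarity in the Clarke sense.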
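For intuition only, here is a minimal Monte-Carlo sketch, not the paper's algorithm, of testing the certificate above: sample points in the δ-ball around x, collect gradients there, and run Frank-Wolfe to find a near-minimal-norm convex combination. The function grad_fn (which should return any element of the generalized gradient, e.g., an autodiff gradient for a ReLU network), the sample count, and the choice of solver are all illustrative assumptions:

import numpy as np

def min_norm_in_hull(grads, iters=200):
    # Frank-Wolfe for the minimum-norm point of conv{grads[0], ..., grads[k-1]}.
    k = len(grads)
    w = np.full(k, 1.0 / k)              # convex-combination weights
    for t in range(iters):
        g = grads.T @ w                  # current point in the hull
        i = int(np.argmin(grads @ g))    # vertex minimizing <grads[i], g>
        gamma = 2.0 / (t + 2.0)          # standard Frank-Wolfe step size
        w *= 1.0 - gamma
        w[i] += gamma
    return grads.T @ w

def is_delta_eps_stationary(grad_fn, x, delta, eps, n_samples=50, seed=0):
    # Monte-Carlo certificate: sample points uniformly in the delta-ball
    # around x, collect (generalized) gradients there, and test whether
    # some convex combination of them has norm at most eps.
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    u = rng.normal(size=(n_samples, d))
    u /= np.linalg.norm(u, axis=1, keepdims=True)   # random directions
    r = delta * rng.random(n_samples) ** (1.0 / d)  # radii for a uniform ball
    grads = np.array([grad_fn(x + ri * ui) for ri, ui in zip(r, u)])
    g = min_norm_in_hull(grads)
    return np.linalg.norm(g) <= eps, g

Note the test is one-sided: a small returned norm certifies approximate stationarity with respect to the sampled gradients, while a large one may simply mean too few samples were drawn.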
