Regularized deep learning with nonconvex penalties

Abstract

Regularization methods are often employed in deep neural networks (DNNs) to prevent overfitting. For penalty-based DNN regularization, convex penalties are typically used because of their optimization guarantees. Recent theoretical work has shown that nonconvex penalties satisfying certain regularity conditions are also guaranteed to perform well with standard optimization algorithms. In this paper, we examine new and existing nonconvex penalties for DNN regularization. We provide theoretical justification for the new penalties and assess the performance of all penalties through DNN analyses of seven datasets.
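To make the setup concrete, the sketch below shows penalty-based regularization with one well-known nonconvex penalty, the minimax concave penalty (MCP). This is an illustrative example only, not necessarily one of the penalties studied in the paper; the parameter values `lam` and `gamma` are assumed for illustration.

```python
import numpy as np

def mcp_penalty(w, lam=0.1, gamma=3.0):
    """Minimax concave penalty (MCP), a standard nonconvex sparsity
    penalty: quadratically tapered near zero, constant in the tails.
    Parameter values are illustrative, not from the paper."""
    w = np.abs(np.asarray(w, dtype=float))
    quad = lam * w - w**2 / (2.0 * gamma)   # region |w| <= gamma * lam
    flat = 0.5 * gamma * lam**2             # constant value beyond it
    return np.where(w <= gamma * lam, quad, flat)

def regularized_loss(data_loss, weight_arrays, lam=0.1, gamma=3.0):
    """Penalized objective: data-fit loss plus the summed penalty over
    all network weights, the generic form of penalty-based DNN
    regularization described in the abstract."""
    penalty = sum(mcp_penalty(w, lam, gamma).sum() for w in weight_arrays)
    return data_loss + penalty

# Usage: penalize a toy set of weight matrices.
weights = [np.array([[0.05, -0.4], [0.0, 1.2]])]
total = regularized_loss(data_loss=0.25, weight_arrays=weights)
```

Unlike a convex L1 penalty, MCP stops growing once a weight exceeds `gamma * lam`, which reduces the bias on large weights; this flat tail is what makes the penalty nonconvex and motivates the regularity conditions discussed in the paper.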