AdaTerm: Adaptive T-Distribution Estimated Robust Moments towards Noise-Robust Stochastic Gradient Optimizer

Neurocomputing, 2022
Main: 14 pages · Appendix: 11 pages · Bibliography: 2 pages · 17 figures · 6 tables
Abstract

As deep learning applications become more practical, practitioners are inevitably faced with datasets corrupted by various kinds of noise, such as measurement errors, mislabeling, and estimated surrogate inputs/outputs, all of which can negatively impact the optimization results. As a safety net, it is natural to improve the noise robustness of the optimization algorithm that updates the network parameters in the final stage of learning. Previous works revealed that the first momentum used in Adam-like stochastic gradient descent optimizers can be modified based on the Student's t-distribution to produce updates that are robust to noise. In this paper, we propose AdaTerm, which derives not only the first momentum but all of the involved statistics from the Student's t-distribution, providing for the first time a unified treatment of the optimization process under this statistical model. When the computed gradients statistically appear to be aberrant, AdaTerm excludes them from the update and reinforces its robustness for subsequent updates; otherwise, it updates the network parameters normally and relaxes its robustness for the following updates. With this noise-adaptive behavior, AdaTerm's excellent learning performance was confirmed on typical optimization problems across several cases where the noise ratio differs and/or is unknown. In addition, we present a new general technique for deriving a theoretical regret bound without relying on AMSGrad.
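To make the adaptive mechanism described above concrete, here is a minimal Python sketch of t-distribution-weighted moment estimation. It illustrates only the general idea of down-weighting statistically aberrant gradients, not the paper's exact AdaTerm update rules; the function name, the fixed degrees of freedom `nu`, and the constants `beta` and `eps` are all assumptions for illustration (AdaTerm itself also adapts the degrees of freedom).

```python
import numpy as np

def t_robust_moment_step(g, m, v, nu=5.0, beta=0.9, eps=1e-8):
    """One illustrative t-distribution-weighted moment update.

    Down-weights a gradient `g` that deviates strongly from the running
    mean `m` relative to the running scale `v`. This is a hypothetical
    sketch of the general technique, not the paper's AdaTerm rule.
    """
    # Squared deviation of the new gradient, normalized by the scale.
    d2 = (g - m) ** 2 / (v + eps)
    # t-distribution weight: ~1 for typical gradients, small for
    # statistically aberrant (outlier) gradients.
    w = (nu + 1.0) / (nu + d2)
    # Fold the weight into an effective interpolation factor: an
    # outlier (w ~ 0) leaves the moments almost unchanged, i.e. it is
    # effectively excluded from the update.
    beta_eff = 1.0 - (1.0 - beta) * w
    m_new = beta_eff * m + (1.0 - beta_eff) * g
    v_new = beta_eff * v + (1.0 - beta_eff) * (g - m) ** 2
    return m_new, v_new, w

# Example: a typical gradient keeps a weight near 1, while an
# outlier gradient is strongly down-weighted.
m, v = 0.0, 1.0
print(t_robust_moment_step(0.5, m, v)[2])   # weight close to 1
print(t_robust_moment_step(50.0, m, v)[2])  # weight close to 0
```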
