On hyper-parameter selection for guaranteed convergence of RMSProp

Cogn Neurodyn. 2024 Dec;18(6):3227-3237. doi: 10.1007/s11571-022-09845-8. Epub 2022 Jul 28.

Abstract

RMSProp is one of the most popular stochastic optimization algorithms in deep learning applications. However, recent work has pointed out that this method may not converge to the optimal solution even in simple convex settings. To this end, we propose a time-varying version of RMSProp to fix the non-convergence issue. Specifically, the hyperparameter β_t is treated as a time-varying sequence rather than a fine-tuned constant. We also provide a rigorous proof that RMSProp can converge to critical points of smooth non-convex objectives, with a convergence rate of order O(log T / T). This provides a new understanding of RMSProp divergence, a common issue in practical applications. Finally, numerical experiments show that time-varying RMSProp exhibits advantages over standard RMSProp on benchmark datasets and supports the theoretical results.

Keywords: Convergence; Deep learning; Neural networks; Non-convex optimization; RMSProp.
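The sketch below illustrates the idea described in the abstract: running RMSProp with a time-varying second-moment coefficient β_t instead of a fixed constant. It is a minimal illustration, not the paper's implementation; the schedule β_t = 1 − 1/t, the test function, and the step size are assumptions made for demonstration only.

```python
import numpy as np

def time_varying_rmsprop(grad_fn, x0, lr=1e-3, eps=1e-8, T=1000):
    """RMSProp with a time-varying coefficient beta_t (illustrative sketch).

    The schedule beta_t = 1 - 1/t below is a hypothetical choice for
    demonstration; the paper's exact condition on beta_t is not reproduced here.
    """
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)                 # running second-moment estimate
    for t in range(1, T + 1):
        g = grad_fn(x)
        beta_t = 1.0 - 1.0 / t           # time-varying coefficient (assumed schedule)
        v = beta_t * v + (1.0 - beta_t) * g * g
        x = x - lr * g / (np.sqrt(v) + eps)
    return x

# Example: a simple smooth non-convex objective f(x) = x^2 + 3*sin(x),
# whose gradient is 2x + 3*cos(x).
if __name__ == "__main__":
    f_grad = lambda x: 2 * x + 3 * np.cos(x)
    x_star = time_varying_rmsprop(f_grad, x0=[2.0], lr=0.05, T=5000)
    print("approximate critical point:", x_star)
```

In this sketch the only change relative to standard RMSProp is that the constant β is replaced by a sequence β_t that increases toward 1, so the second-moment estimate is averaged over a growing window as training proceeds.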