In this paper, a novel recurrent sigma-pi-sigma neural network (RSPSNN) is proposed, which combines the advantages of higher-order and recurrent neural networks. A batch gradient algorithm is used to train the RSPSNN, searching for the optimal weights by minimizing the mean squared error (MSE). To substantiate the unique equilibrium state of the RSPSNN, its stable convergence is proven; this property is one of the most significant indices of effectiveness and overcomes the instability problem in training such networks. Finally, five empirical experiments are conducted to evaluate its validity more precisely: the RSPSNN is successfully applied to function approximation, prediction, the parity problem, classification, and image simulation, which verifies its effectiveness and practicability.
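To illustrate the training idea, the following is a minimal sketch (not the paper's implementation) of batch gradient descent minimizing MSE for a single sigma-pi block, i.e. a product of linear summing units passed through a sigmoid, applied to the 2-bit parity (XOR) problem mentioned above. All names, the network size (two summing units), the learning rate, and the epoch count are illustrative assumptions; the paper's RSPSNN additionally includes recurrent feedback, which is omitted here.

```python
import math
import random

random.seed(0)

# 2-bit parity (XOR), one of the benchmark problems mentioned in the abstract
X = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
Y = [0.0, 1.0, 1.0, 0.0]

K = 2  # two summing units feed one product unit (a minimal sigma-pi block)
# w[j] = [weight for x1, weight for x2, bias] of summing unit j
w = [[random.uniform(-1.0, 1.0) for _ in range(3)] for _ in range(K)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x):
    """Sigma layer (weighted sums) -> pi layer (product) -> sigmoid output."""
    sums = [w[j][0] * x[0] + w[j][1] * x[1] + w[j][2] for j in range(K)]
    prod = sums[0] * sums[1]
    return sigmoid(prod), sums

def mse():
    return sum((forward(x)[0] - y) ** 2 for x, y in zip(X, Y)) / len(X)

mse_before = mse()

lr = 0.1
for _ in range(10000):
    grad = [[0.0] * 3 for _ in range(K)]
    for x, y in zip(X, Y):
        out, sums = forward(x)
        dz = (out - y) * out * (1.0 - out)   # dMSE/d(prod), up to a constant
        for j in range(K):
            other = sums[1 - j]              # product rule: the other summing unit
            for k in range(3):
                xk = x[k] if k < 2 else 1.0  # bias input is 1
                grad[j][k] += dz * other * xk
    # batch update: gradients are accumulated over ALL samples before one step
    for j in range(K):
        for k in range(3):
            w[j][k] -= lr * grad[j][k]

mse_after = mse()
```

The defining feature of the batch scheme, as opposed to online (per-sample) updates, is that the weights change only once per pass over the whole training set, which is what makes the convergence analysis tractable.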
© 2024. The Author(s).