Volume 26, Issue 1
Convergence of Online Gradient Method with Penalty for BP Neural Networks

Hongmei Shao, Wei Wu & Lijun Liu

Commun. Math. Res., 26 (2010), pp. 67-75.

Published online: 2021-05

  • Abstract

The online gradient method has been widely used as a learning algorithm for training feedforward neural networks. A penalty term is often introduced into the training procedure to improve the generalization performance and to decrease the magnitude of the network weights. In this paper, some weight boundedness and deterministic convergence theorems are proved for the online gradient method with penalty for BP neural networks with a hidden layer, assuming that the training samples are supplied to the network in a fixed order within each epoch. The monotonicity of the error function with penalty is also guaranteed during the training iterations. Simulation results for a 3-bit parity problem are presented to support our theoretical results.
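
To make the training procedure analyzed in the paper concrete, below is a minimal Python sketch (not the authors' code): a one-hidden-layer sigmoid network trained on the 3-bit parity task by the online gradient method, with an L2 penalty added to the error function. The learning rate eta, penalty coefficient lam, hidden-layer size, epoch count, and the choice to leave the biases unpenalized are all illustrative assumptions, not taken from the paper.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# 3-bit parity: the target is 1 iff the input contains an odd number of ones.
X = np.array([[b >> 2 & 1, b >> 1 & 1, b & 1] for b in range(8)], dtype=float)
y = (X.sum(axis=1) % 2).reshape(-1, 1)

n_hidden, eta, lam, n_epochs = 10, 0.5, 1e-4, 30000  # illustrative choices
V = rng.normal(scale=0.5, size=(3, n_hidden))  # input -> hidden weights
b1 = np.zeros((1, n_hidden))                   # hidden biases
w = rng.normal(scale=0.5, size=(n_hidden, 1))  # hidden -> output weights
b2 = np.zeros((1, 1))                          # output bias

for epoch in range(n_epochs):
    # Samples are presented in a fixed order within each epoch,
    # matching the assumption of the convergence theorems.
    for j in range(len(X)):
        x, t = X[j:j + 1], y[j:j + 1]
        h = sigmoid(x @ V + b1)        # hidden-layer output
        o = sigmoid(h @ w + b2)        # network output
        d_o = (o - t) * o * (1 - o)    # output delta (squared error, sigmoid)
        d_h = (d_o @ w.T) * h * (1 - h)
        # Online gradient step on the instantaneous error; the L2 penalty
        # contributes the extra lam * weight terms (biases left unpenalized).
        w -= eta * (h.T @ d_o + lam * w)
        b2 -= eta * d_o
        V -= eta * (x.T @ d_h + lam * V)
        b1 -= eta * d_h

# Predictions should approximate the parity targets 0, 1, 1, 0, 1, 0, 0, 1.
print(np.round(sigmoid(sigmoid(X @ V + b1) @ w + b2).ravel(), 2))

The penalty both bounds the weights during training (the boundedness theorems above) and shrinks them, which is the generalization benefit the abstract refers to.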


  • Copyright

COPYRIGHT: © Global Science Press

  • BibTex
@Article{CMR-26-67,
  author  = {Shao, Hongmei and Wu, Wei and Liu, Lijun},
  title   = {Convergence of Online Gradient Method with Penalty for BP Neural Networks},
  journal = {Communications in Mathematical Research},
  year    = {2010},
  volume  = {26},
  number  = {1},
  pages   = {67--75},
  issn    = {2707-8523},
  url     = {http://global-sci.org/intro/article_detail/cmr/19174.html}
}
  • RIS

TY  - JOUR
T1  - Convergence of Online Gradient Method with Penalty for BP Neural Networks
AU  - Shao, Hongmei
AU  - Wu, Wei
AU  - Liu, Lijun
JO  - Communications in Mathematical Research
VL  - 26
IS  - 1
SP  - 67
EP  - 75
PY  - 2010
SN  - 2707-8523
UR  - https://global-sci.org/intro/article_detail/cmr/19174.html
KW  - convergence, online gradient method, penalty, monotonicity
ER  -
  • TXT

Shao, Hongmei, Wu, Wei and Liu, Lijun. (2010). Convergence of Online Gradient Method with Penalty for BP Neural Networks. Communications in Mathematical Research, 26(1), 67-75.