Volume 16, Issue 3
Convergence of BP Algorithm for Training MLP with Linear Output

H. M. Shao, W. Wu & W. B. Liu

Numer. Math. J. Chinese Univ. (English Ser.) 16 (2007), pp. 193-202

Published online: 2007-08

  • Abstract
The capability of multilayer perceptrons (MLPs) to approximate continuous functions with arbitrary accuracy has been demonstrated over the past decades. The back propagation (BP) algorithm is the most popular learning algorithm for training MLPs. In this paper, a simple iteration formula is used to select the learning rate for each cycle of the training procedure, and a convergence result is presented for the BP algorithm for training an MLP with one hidden layer and a linear output unit. Monotonicity of the error function during the training iteration is also guaranteed.
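
The abstract describes, but does not reproduce, the learning-rate iteration formula, so the sketch below only illustrates the setting: batch BP (gradient descent on the squared error) for an MLP with one hidden layer of sigmoid units and a single linear output unit, written in Python/NumPy. The step-halving rule used here to enforce a monotone decrease of the error is an assumed stand-in for the paper's formula, and the names train_bp, mlp_error, m and eta0 are hypothetical.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mlp_error(X, t, V, w):
    # Squared error of a one-hidden-layer MLP with a linear output unit
    # (biases omitted for brevity).
    H = sigmoid(X @ V)          # hidden activations, shape (n, m)
    y = H @ w                   # linear output unit, shape (n,)
    return 0.5 * np.sum((y - t) ** 2)

def train_bp(X, t, m=8, eta0=1.0, epochs=500, seed=0):
    # Batch BP. The step-halving below is NOT the paper's learning-rate
    # formula (the abstract does not state it); it is one simple rule
    # that guarantees the error decreases monotonically, the property
    # the paper's convergence result relies on.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    V = 0.1 * rng.standard_normal((d, m))    # input -> hidden weights
    w = 0.1 * rng.standard_normal(m)         # hidden -> output weights
    E = mlp_error(X, t, V, w)
    for _ in range(epochs):
        # Forward pass.
        H = sigmoid(X @ V)
        y = H @ w
        r = y - t                                     # residuals
        # Back propagation of the squared-error gradient.
        grad_w = H.T @ r                              # dE/dw, shape (m,)
        delta = np.outer(r, w) * H * (1.0 - H)        # hidden-layer deltas
        grad_V = X.T @ delta                          # dE/dV, shape (d, m)
        # Halve the learning rate until the step strictly reduces the error.
        eta = eta0
        while mlp_error(X, t, V - eta * grad_V, w - eta * grad_w) > E:
            eta *= 0.5
            if eta < 1e-12:                  # no descent step found; stop
                return V, w, E
        V, w = V - eta * grad_V, w - eta * grad_w
        E = mlp_error(X, t, V, w)            # monotone by construction
    return V, w, E

# Usage example: fit samples of sin(2*pi*x) on [0, 1].
X = np.linspace(0.0, 1.0, 50).reshape(-1, 1)
t = np.sin(2.0 * np.pi * X[:, 0])
V, w, E = train_bp(X, t)
print("final error:", E)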
  • Copyright

COPYRIGHT: © Global Science Press

  • BibTex
@Article{NM-16-193,
  author  = {H. M. Shao and W. Wu and W. B. Liu},
  title   = {Convergence of BP Algorithm for Training MLP with Linear Output},
  journal = {Numerical Mathematics, a Journal of Chinese Universities},
  year    = {2007},
  volume  = {16},
  number  = {3},
  pages   = {193--202},
  url     = {http://global-sci.org/intro/article_detail/nm/8053.html}
}