TY - JOUR
T1 - Convergence of BP Algorithm for Training MLP with Linear Output
AU - Shao, H. M.
AU - Wu, W.
AU - Liu, W. B.
JO - Numerical Mathematics, a Journal of Chinese Universities
VL - 16
IS - 3
SP - 193
EP - 202
PY - 2007
DA - 2007/08
DO - http://doi.org/
UR - https://global-sci.org/intro/article_detail/nm/8053.html
KW -
AB - The capability of multilayer perceptrons (MLPs) to approximate continuous functions with arbitrary accuracy has been demonstrated over the past decades. The back propagation (BP) algorithm is the most popular learning algorithm for training MLPs. In this paper, a simple iteration formula is used to select the learning rate for each cycle of the training procedure, and a convergence result is presented for the BP algorithm for training an MLP with a hidden layer and a linear output unit. Monotonicity of the error function is also guaranteed during the training iterations.
ER -
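The abstract describes the training scheme only at a high level and does not reproduce the paper's "simple iteration formula" for the learning rate. The Python sketch below is therefore an illustration under stated assumptions, not the paper's method: a tanh hidden layer feeding a single linear output unit, a squared-error objective, and a halving (backtracking) rule standing in for the paper's rate formula, chosen purely so that the error sequence decreases monotonically as the abstract states. All function names, shapes, and constants are hypothetical.

import numpy as np

def forward(W, v, X):
    # Hidden layer with tanh activation; single linear output unit.
    H = np.tanh(X @ W.T)              # shape (n_samples, n_hidden)
    return H @ v, H                   # one scalar output per sample

def error(W, v, X, t):
    y, _ = forward(W, v, X)
    return 0.5 * np.mean((y - t) ** 2)

def gradients(W, v, X, t):
    y, H = forward(W, v, X)
    r = (y - t) / len(t)              # averaged residual
    grad_v = H.T @ r                  # gradient w.r.t. output weights
    grad_W = ((r[:, None] * v) * (1.0 - H ** 2)).T @ X  # w.r.t. hidden weights
    return grad_W, grad_v

def train(X, t, n_hidden=8, eta0=1.0, cycles=200, seed=0):
    rng = np.random.default_rng(seed)
    W = 0.1 * rng.standard_normal((n_hidden, X.shape[1]))
    v = 0.1 * rng.standard_normal(n_hidden)
    E = error(W, v, X, t)
    for _ in range(cycles):
        gW, gv = gradients(W, v, X, t)
        eta = eta0
        # Halve the rate until the error strictly decreases, so the error
        # sequence is monotone (a stand-in for the paper's rate formula).
        while eta > 1e-12 and error(W - eta * gW, v - eta * gv, X, t) >= E:
            eta *= 0.5
        if eta <= 1e-12:
            break                     # no descending step found; stop early
        W, v = W - eta * gW, v - eta * gv
        E = error(W, v, X, t)
    return W, v, E

# Illustrative use: fit a smooth 1-D target.
X = np.linspace(-2.0, 2.0, 100).reshape(-1, 1)
W, v, E = train(X, np.sin(X).ravel())
print("final training error:", E)

Halving the rate until the error drops is merely the simplest way to enforce monotone descent; the paper's actual learning-rate formula and its convergence analysis are specific to the setting studied there.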