Volume 16, Issue 3
Convergence of BP Algorithm for Training MLP with Linear Output

H. M. Shao, W. Wu & W. B. Liu

Numer. Math. J. Chinese Univ. (English Ser.) 16 (2007), pp. 193-202

  • Abstract

The capability of multilayer perceptrons (MLPs) to approximate continuous functions with arbitrary accuracy has been demonstrated over the past decades. The back propagation (BP) algorithm is the most popular learning algorithm for training MLPs. In this paper, a simple iteration formula is used to select the learning rate for each cycle of the training procedure, and a convergence result is presented for the BP algorithm for training an MLP with a hidden layer and a linear output unit. The monotonicity of the error function during the training iteration is also guaranteed.
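The setting of the paper — an MLP with one hidden layer, a single linear output unit, and a learning rate chosen anew for each training cycle — can be sketched as follows. This is a minimal illustration, not the paper's method: the diminishing per-cycle rate `eta0 / (1 + k)` is an assumed placeholder for the iteration formula the authors actually analyze, and the network sizes, data, and initialization are invented for the example.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_mlp(samples, hidden=4, cycles=200, eta0=0.1, seed=0):
    """Batch BP for a 1-input MLP with one sigmoid hidden layer and a
    linear output unit. Returns the per-cycle error values.

    NOTE: eta0 / (1 + k) is an assumed diminishing learning rate, not the
    iteration formula from the paper."""
    rng = random.Random(seed)
    V = [rng.uniform(-0.5, 0.5) for _ in range(hidden)]  # hidden weights
    b = [rng.uniform(-0.5, 0.5) for _ in range(hidden)]  # hidden biases
    w = [rng.uniform(-0.5, 0.5) for _ in range(hidden)]  # output weights
    errors = []
    for k in range(cycles):
        eta = eta0 / (1 + k)  # per-cycle learning rate (assumed formula)
        gV = [0.0] * hidden
        gb = [0.0] * hidden
        gw = [0.0] * hidden
        err = 0.0
        for x, t in samples:
            h = [sigmoid(V[j] * x + b[j]) for j in range(hidden)]
            y = sum(w[j] * h[j] for j in range(hidden))  # linear output unit
            e = y - t
            err += 0.5 * e * e  # squared-error function
            # Back-propagate the error to accumulate batch gradients.
            for j in range(hidden):
                gw[j] += e * h[j]
                dh = e * w[j] * h[j] * (1.0 - h[j])
                gV[j] += dh * x
                gb[j] += dh
        errors.append(err)
        # Gradient-descent update with the selected learning rate.
        for j in range(hidden):
            w[j] -= eta * gw[j]
            V[j] -= eta * gV[j]
            b[j] -= eta * gb[j]
    return errors

# Example: fit f(x) = x^2 on a few sample points.
samples = [(x / 4.0, (x / 4.0) ** 2) for x in range(-4, 5)]
errors = train_mlp(samples)
```

With the paper's learning-rate formula, the error sequence `errors` would be monotonically decreasing; with the crude diminishing rate used here, one can only expect the error to be lower at the end of training than at the start.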

  • History

Published online: 2007-08
