Volume 14, Issue 4
Boundedness and Convergence Analysis of a Pi-Sigma Neural Network Based on Online Gradient Method and Sparse Optimization

Qinwei Fan, Le Liu, Shuai Zhao, Zhiwen Zhang, Xiaofei Yang, Zhiwei Xing & Xingshi He

East Asian J. Appl. Math., 14 (2024), pp. 769-787.

Published online: 2024-09

  • Abstract

High-order neural networks have strong nonlinear mapping ability, but their more complex structure limits network efficiency, and the relevant theoretical analysis remains incomplete. To address these problems, we propose an online gradient learning algorithm for a Pi-Sigma neural network with a smoothed group lasso regularization term. The original lasso regularizer contains absolute values and is therefore not differentiable at the origin, which causes oscillations in experiments and poses a great challenge to the convergence analysis of the algorithm. We overcome this deficiency with a smoothing technique. The main contribution of this paper is the adoption of an online learning algorithm, which effectively improves training efficiency. Rigorous theoretical proofs, covering both strong and weak convergence, are also presented. Finally, numerical experiments verify the effectiveness of the algorithm and the correctness of the theoretical results.
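The method described in the abstract can be sketched in a few lines. The following is a minimal illustrative implementation, not the authors' code: the network size, learning rate `eta`, penalty weight `lam`, smoothing constant `eps`, and the sigmoid output unit are all assumptions, and the surrogate `sqrt(w^2 + eps)` stands in for the paper's smoothed regularizer, which removes the non-differentiability of `|w|` at the origin.

```python
import numpy as np

rng = np.random.default_rng(0)

def smooth_abs(w, eps=1e-4):
    # Smooth surrogate for |w|: sqrt(w^2 + eps) is differentiable at the origin.
    return np.sqrt(w * w + eps)

def sigmoid(z):
    # Clipping avoids overflow in exp for large product values.
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

class PiSigma:
    """Toy Pi-Sigma network: linear summing units feeding one product unit."""

    def __init__(self, n_in, n_units, eta=0.05, lam=1e-3, eps=1e-4):
        self.W = rng.normal(scale=0.3, size=(n_units, n_in + 1))  # +1 bias column
        self.eta, self.lam, self.eps = eta, lam, eps

    def forward(self, x):
        xb = np.append(x, 1.0)     # input extended with a bias term
        s = self.W @ xb            # sigma layer: linear sums
        return sigmoid(np.prod(s)), s, xb

    def online_step(self, x, t):
        # One sample, one weight update: the "online" part of the method.
        y, s, xb = self.forward(x)
        err = y - t
        dy = y * (1.0 - y)         # sigmoid derivative at the product
        for j in range(self.W.shape[0]):
            prod_others = np.prod(np.delete(s, j))
            grad = err * dy * prod_others * xb
            # Gradient of the smoothed lasso penalty lam * sum sqrt(w^2 + eps):
            grad += self.lam * self.W[j] / np.sqrt(self.W[j] ** 2 + self.eps)
            self.W[j] -= self.eta * grad
        return 0.5 * err ** 2

# Toy run on XOR, updating after every sample (online learning).
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([0., 1., 1., 0.])
net = PiSigma(n_in=2, n_units=2)
for epoch in range(500):
    for i in rng.permutation(4):
        net.online_step(X[i], T[i])
```

Because the surrogate is smooth everywhere, the weight trajectory avoids the oscillation the abstract attributes to the raw absolute value, while the penalty still drives small weights toward zero for sparsity.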

  • AMS Subject Headings

65M10, 78A48

  • Copyright

COPYRIGHT: © Global Science Press

  • BibTeX
  • RIS
  • TXT
@Article{EAJAM-14-769,
  author  = {Fan, Qinwei and Liu, Le and Zhao, Shuai and Zhang, Zhiwen and Yang, Xiaofei and Xing, Zhiwei and He, Xingshi},
  title   = {Boundedness and Convergence Analysis of a Pi-Sigma Neural Network Based on Online Gradient Method and Sparse Optimization},
  journal = {East Asian Journal on Applied Mathematics},
  year    = {2024},
  volume  = {14},
  number  = {4},
  pages   = {769--787},
  issn    = {2079-7370},
  doi     = {10.4208/eajam.2023-025.080823},
  url     = {http://global-sci.org/intro/article_detail/eajam/23437.html}
}
TY  - JOUR
T1  - Boundedness and Convergence Analysis of a Pi-Sigma Neural Network Based on Online Gradient Method and Sparse Optimization
AU  - Fan, Qinwei
AU  - Liu, Le
AU  - Zhao, Shuai
AU  - Zhang, Zhiwen
AU  - Yang, Xiaofei
AU  - Xing, Zhiwei
AU  - He, Xingshi
JO  - East Asian Journal on Applied Mathematics
VL  - 14
IS  - 4
SP  - 769
EP  - 787
PY  - 2024
DA  - 2024/09
SN  - 2079-7370
DO  - 10.4208/eajam.2023-025.080823
UR  - https://global-sci.org/intro/article_detail/eajam/23437.html
KW  - Online gradient method
KW  - Pi-Sigma neural network
KW  - regularizer
KW  - convergence
ER  -

Qinwei Fan, Le Liu, Shuai Zhao, Zhiwen Zhang, Xiaofei Yang, Zhiwei Xing & Xingshi He. (2024). Boundedness and Convergence Analysis of a Pi-Sigma Neural Network Based on Online Gradient Method and Sparse Optimization. East Asian Journal on Applied Mathematics. 14 (4). 769-787. doi:10.4208/eajam.2023-025.080823