J. Nonl. Mod. Anal., 6 (2024), pp. 485-513.
Published online: 2024-06
There are several main challenges in solving nonlinear differential equations with artificial neural networks (ANNs), such as a nonlinear system's sensitivity to its initial values, discretization, and strategies for incorporating physics-based information into ANNs. Regarding the first issue, this paper addresses initial value problems for nonlinear dynamical systems (a Duffing oscillator and a Burgers' equation) in which the diminishing influence of the initial constraints causes large global truncation errors in later sub-domains, using meta-learning-based physics-informed neural networks (MPINNs). MPINNs with dual learners outperform physics-informed neural networks with a single learner, which lack a fine-reinitialization capability: the former improves solution convergence by 98.83% in sub-time domain (III) of the Duffing oscillator and by 85.89% at $t = 45$ in the Burgers' equation problem, compared with the latter. Model accuracy depends strongly on the adaptability of the initial parameters in the first hidden layers of the meta-models. Correlation analyses show that these parameters become less correlated (the Duffing oscillator) or more correlated (the Burgers' equation) during fine reinitialization, depending on whether the update scheme differs from or resembles the one used in pre-initialization. In the first example, the MPINN both mitigates the model's sensitivity to its output and improves its accuracy. Conversely, the second example shows that the proposed approach is not sufficient to resolve both issues simultaneously, since increased sensitivity of the model to its output leads to higher accuracy. Applying transfer learning reduces the number of iterative pre-meta-trainings.
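To make the two ingredients named in the abstract concrete, the following is a minimal JAX sketch (not the authors' implementation) of a physics-informed loss for the Duffing oscillator, $\ddot{x} + \delta \dot{x} + \alpha x + \beta x^3 = \gamma \cos(\omega t)$, wrapped in a MAML-style dual-learner loop: an inner "fine reinitialization" on one sub-time-domain and an outer "pre-initialization" across sub-domains. All coefficients, network sizes, step sizes, and sub-domain splits are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: MAML-style meta-learning around a Duffing-oscillator PINN loss.
# Coefficients, architecture, and sub-domain splits below are assumed for illustration.
import jax
import jax.numpy as jnp

alpha, beta, delta, gamma, omega = 1.0, 5.0, 0.02, 8.0, 0.5  # assumed oscillator coefficients
x0, v0 = 1.0, 0.0                                            # assumed initial conditions

def init_params(key, sizes=(1, 32, 32, 1)):
    # Small fully connected network x_theta(t); layer sizes are an assumption.
    params = []
    for m, n in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        params.append((jax.random.normal(sub, (m, n)) / jnp.sqrt(m), jnp.zeros(n)))
    return params

def net(params, t):
    h = jnp.atleast_1d(t)
    for W, b in params[:-1]:
        h = jnp.tanh(h @ W + b)
    W, b = params[-1]
    return (h @ W + b)[0]

def residual(params, t):
    # PDE/ODE residual of the Duffing equation at time t via automatic differentiation.
    x = lambda s: net(params, s)
    dx = jax.grad(x)
    ddx = jax.grad(dx)
    return ddx(t) + delta * dx(t) + alpha * x(t) + beta * x(t) ** 3 - gamma * jnp.cos(omega * t)

def pinn_loss(params, ts):
    # Physics-informed loss: mean squared residual plus initial-condition penalties.
    res = jax.vmap(lambda t: residual(params, t))(ts)
    ic = (net(params, 0.0) - x0) ** 2 + (jax.grad(lambda s: net(params, s))(0.0) - v0) ** 2
    return jnp.mean(res ** 2) + ic

def inner_adapt(params, ts, lr=1e-3, steps=5):
    # Inner learner ("fine reinitialization"): a few gradient steps on one sub-time-domain.
    for _ in range(steps):
        grads = jax.grad(pinn_loss)(params, ts)
        params = jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)
    return params

def meta_loss(meta_params, tasks):
    # Outer learner ("pre-initialization"): post-adaptation loss averaged over sub-domains.
    return jnp.mean(jnp.stack([pinn_loss(inner_adapt(meta_params, ts), ts) for ts in tasks]))

key = jax.random.PRNGKey(0)
meta_params = init_params(key)
tasks = [jnp.linspace(a, a + 10.0, 64) for a in (0.0, 10.0, 20.0)]  # assumed sub-time-domains
for it in range(3):  # a few outer meta-updates, just to show the loop structure
    grads = jax.grad(meta_loss)(meta_params, tasks)
    meta_params = jax.tree_util.tree_map(lambda p, g: p - 1e-3 * g, meta_params, grads)
    print("meta loss:", float(meta_loss(meta_params, tasks)))
```

The same structure carries over to the Burgers' equation example, with the ODE residual replaced by a PDE residual in $(t, x)$ and the sub-time-domains playing the role of meta-learning tasks.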