In this paper, we present an improved analysis of the Physics-Informed Neural Networks (PINNs) method for solving second-order elliptic equations. By assuming an intrinsic sparse structure in the underlying solution, we provide a convergence rate analysis that can overcome the curse of dimensionality (CoD). Specifically, using approximation theory in Sobolev spaces together with the multivariate Faà di Bruno formula, we first derive the approximation error for compositional functions with few degrees of freedom in each compositional layer. Furthermore, by integrating several results on the statistical error of neural networks, we obtain a refined convergence rate analysis for PINNs in solving elliptic equations with compositional solutions. We also demonstrate the benefits of the intrinsic sparse structure with two simple numerical examples.
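To make the setting concrete, the following is a minimal sketch (not the paper's method or analysis) of a PINN residual loss for a simple second-order elliptic problem: the 1-D Poisson equation -u'' = f on (0, 1) with zero Dirichlet boundary conditions. The network architecture, the manufactured source term, and all sample sizes are illustrative assumptions.

```python
import torch

# Small fully connected network u_theta(x); architecture is an arbitrary choice.
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def f(x):
    # Manufactured source term so that u(x) = sin(pi x) solves -u'' = f.
    return (torch.pi ** 2) * torch.sin(torch.pi * x)

def pinn_loss(n_interior=64):
    # Interior collocation points: penalize the PDE residual -u'' - f.
    x = torch.rand(n_interior, 1, requires_grad=True)
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    residual = -d2u - f(x)
    # Boundary points: penalize violation of u(0) = u(1) = 0.
    xb = torch.tensor([[0.0], [1.0]])
    ub = net(xb)
    return residual.pow(2).mean() + ub.pow(2).mean()

loss = pinn_loss()
```

Minimizing this loss over the network parameters (e.g. with Adam) yields the standard PINN training procedure whose approximation and statistical errors the paper analyzes.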
DOI: https://doi.org/10.4208/aam.OA-2023-0021
URL: http://global-sci.org/intro/article_detail/aam/21993.html