In this paper we propose a Deep Learning Galerkin Method (DGM), based on a deep neural network learning algorithm, to approximate the general second-order linear elliptic problem. The method combines the Galerkin method with machine learning: instead of a linear combination of basis functions, the DGM uses a deep neural network as the trial solution. The algorithm is mesh-free; we train the network by randomly sampling points in space and applying gradient descent so that the network satisfies the differential operator and the boundary conditions. Moreover, we prove the ability of the neural network solution to approximate the exact solution by showing the convergence of the loss function and the convergence of the neural network to the exact solution in the $L^2$ norm under certain conditions. Finally, numerical experiments illustrate the approximation ability of the neural network.
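The training loop the abstract describes — random collocation points, a penalized residual loss, gradient descent — can be sketched in one dimension. This is a minimal illustration, not the paper's method: it assumes a manufactured Poisson problem $-u'' = f$ on $(0,1)$ with $u(0)=u(1)=0$ and exact solution $u(x)=\sin(\pi x)$, uses a single-hidden-layer tanh network with an analytic second derivative, and replaces backpropagation with finite-difference gradients for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
H = 8  # hidden width of the single tanh layer

# Parameters: outer weights a, inner weights w, biases b, flattened into theta.
theta = np.concatenate([rng.normal(scale=0.1, size=H),   # a
                        rng.normal(scale=1.0, size=H),   # w
                        rng.normal(scale=1.0, size=H)])  # b

def unpack(theta):
    return theta[:H], theta[H:2*H], theta[2*H:]

def u(x, theta):
    """Network trial solution u(x; theta) = sum_i a_i tanh(w_i x + b_i)."""
    a, w, b = unpack(theta)
    return np.tanh(np.outer(x, w) + b) @ a

def u_xx(x, theta):
    """Analytic second derivative: (d^2/dx^2) tanh(wx+b) = -2 t (1 - t^2) w^2."""
    a, w, b = unpack(theta)
    t = np.tanh(np.outer(x, w) + b)
    return (-2.0 * t * (1.0 - t**2) * w**2) @ a

def f(x):
    # Manufactured source so that u(x) = sin(pi x) solves -u'' = f.
    return np.pi**2 * np.sin(np.pi * x)

def loss(theta, x):
    # PDE residual of -u'' = f at interior samples, plus boundary penalty.
    interior = np.mean((u_xx(x, theta) + f(x))**2)
    bdry = u(np.array([0.0]), theta)[0]**2 + u(np.array([1.0]), theta)[0]**2
    return interior + bdry

def grad_fd(theta, x, eps=1e-6):
    """Central finite-difference gradient (stand-in for backpropagation)."""
    g = np.zeros_like(theta)
    for i in range(theta.size):
        tp, tm = theta.copy(), theta.copy()
        tp[i] += eps
        tm[i] -= eps
        g[i] = (loss(tp, x) - loss(tm, x)) / (2.0 * eps)
    return g

# Random collocation points in space (the "mesh-free" sampling step).
x = rng.uniform(0.0, 1.0, size=64)

loss0 = loss(theta, x)
for step in range(300):            # plain gradient descent on the penalized loss
    theta -= 1e-3 * grad_fd(theta, x)
loss1 = loss(theta, x)

print(f"loss before: {loss0:.4f}, loss after: {loss1:.4f}")
```

In practice the sampled points are redrawn each iteration and the gradients are computed by automatic differentiation; the fixed sample and finite-difference gradients here only keep the sketch self-contained.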
ISSN: 2617-8710. URL: http://global-sci.org/intro/article_detail/ijnam/19114.html