J. Comp. Math., 41 (2023), pp. 1281-1304.
Published online: 2023-11
doi: 10.4208/jcm.2205-m2021-0277
We propose a deep-learning-based discontinuous Galerkin method (D2GM) to solve hyperbolic equations with discontinuous solutions and random uncertainties. The main computational challenges for such problems are the discontinuities of the solutions and the curse of dimensionality due to the uncertainties. Deep learning techniques have been favored for high-dimensional problems but face difficulties when the solution is not smooth, and have therefore so far been applied mainly to viscous hyperbolic systems that admit only smooth solutions. We alleviate this difficulty by setting up the loss function with discrete shock-capturing schemes (the discontinuous Galerkin method, as an example), since the solutions are smooth in the discrete space. The convergence of the D2GM is established via an argument in the spirit of the Lax equivalence theorem. The high-dimensional random space is handled by the Monte Carlo method. This setup allows the D2GM to approximate high-dimensional functions over the random space with satisfactory accuracy at reasonable cost. The D2GM is found numerically to be first-order and second-order accurate for the (stochastic) linear conservation law with smooth solutions when piecewise constant and piecewise linear basis functions are used, respectively. Numerous examples verify the efficiency and robustness of the D2GM with the dimension of the random variables up to 200 for the (stochastic) linear conservation law and the (stochastic) Burgers' equation.
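To make the construction concrete, the sketch below shows one way a D2GM-style loss could be assembled for the stochastic linear advection equation u_t + a(z) u_x = 0 with piecewise constant (P0) basis functions, in which case the discrete DG residual reduces to a finite-volume scheme with a Lax-Friedrichs flux, and the random space is averaged by Monte Carlo sampling. This is a minimal illustration under stated assumptions, not the authors' implementation: the network architecture, the random wave speed a(z), the initial data, and all grid parameters are hypothetical choices made here for demonstration.

```python
# Illustrative sketch of a D2GM-style loss (assumptions noted above; not the authors' code).
# A network approximates cell averages u(x, t, z); the loss is the mean-squared
# discrete residual of a P0 (finite-volume) scheme with a Lax-Friedrichs flux,
# Monte Carlo averaged over samples of the random variable z.
import torch
import torch.nn as nn

torch.manual_seed(0)

class Net(nn.Module):
    """Fully connected network approximating cell averages u(x, t, z)."""
    def __init__(self, width=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(3, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, 1),
        )
    def forward(self, x, t, z):
        return self.body(torch.stack([x, t, z], dim=-1)).squeeze(-1)

# Space-time grid for the discrete residual (periodic domain [0, 1], t in [0, 0.5]).
nx, nt = 64, 64
dx, dt = 1.0 / nx, 0.5 / nt
x = (torch.arange(nx) + 0.5) * dx      # cell centres
t = torch.arange(nt + 1) * dt          # time levels

def u0(x, z):
    # Hypothetical random initial data: sine wave with random phase shift z.
    return torch.sin(2 * torch.pi * (x + 0.1 * z))

def d2gm_loss(net, n_mc=32):
    """Monte Carlo average of the discrete P0 residual over random samples z."""
    z = torch.rand(n_mc) * 2 - 1                      # z ~ U(-1, 1)
    a = 1.0 + 0.5 * z                                 # hypothetical random wave speed a(z)
    X, T, Z = torch.meshgrid(x, t, z, indexing="ij")  # shape (nx, nt+1, n_mc)
    U = net(X, T, Z)
    # Global Lax-Friedrichs flux F_{j+1/2} for f(u) = a(z) * u, periodic cells.
    Up = torch.roll(U, -1, dims=0)                    # right-neighbour cell averages
    alpha = a.abs().max()
    F = 0.5 * a * (U + Up) - 0.5 * alpha * (Up - U)   # F_{j+1/2}
    Fm = torch.roll(F, 1, dims=0)                     # F_{j-1/2}
    # Forward-Euler-in-time residual of the finite-volume (P0 DG) scheme.
    residual = (U[:, 1:] - U[:, :-1]) / dt + (F[:, :-1] - Fm[:, :-1]) / dx
    ic = U[:, 0] - u0(X[:, 0], Z[:, 0])               # initial-condition mismatch
    return residual.pow(2).mean() + ic.pow(2).mean()

net = Net()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(500):                                # short run for illustration
    opt.zero_grad()
    loss = d2gm_loss(net)
    loss.backward()
    opt.step()
```

With piecewise constant elements as above one would expect the first-order accuracy reported in the abstract; a piecewise linear basis with a suitable numerical flux and limiter would play the role of the second-order variant, at the cost of a more involved discrete residual.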