Commun. Comput. Phys., 30 (2021), pp. 635-665.
Published online: 2021-07
Generative adversarial networks (GANs) were initially proposed to generate images by learning from a large number of samples. Recently, GANs have been used to emulate complex physical systems such as turbulent flows. However, a critical question must be answered before GANs can be considered trusted emulators for physical systems: do GAN-generated samples conform to the various physical constraints? These include both deterministic constraints (e.g., conservation laws) and statistical constraints (e.g., the energy spectrum of turbulent flows). The latter have been studied in a companion paper (Wu et al., Enforcing statistical constraints in generative adversarial networks for modeling chaotic dynamical systems. Journal of Computational Physics, 406, 109209, 2020). In the present work, we enforce deterministic yet imprecise constraints on GANs by incorporating them into the loss function of the generator. We evaluate the performance of physics-constrained GANs on two representative tasks with geometric constraints (generating points on circles) and differential constraints (generating divergence-free flow velocity fields), respectively. In both cases, the constrained GANs produce samples that conform to the underlying constraints quite accurately, even though the constraints are only enforced up to a specified interval. More importantly, the imposed constraints significantly accelerate convergence and improve the robustness of training, indicating that they serve as a physics-based regularization. These improvements are noteworthy, as convergence and robustness are two well-known obstacles in the training of GANs.
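The constraint-in-the-loss idea described above can be sketched as a penalty term added to the generator objective, i.e., L_G = L_adv + λ·L_constraint. The snippet below is an illustrative sketch, not the paper's code: the hinge form, the tolerance `tol`, and the function names are assumptions. It shows plausible penalty terms for the two tasks the abstract mentions — points on a circle and a divergence-free 2-D velocity field.

```python
import numpy as np

def circle_constraint_penalty(samples, radius=1.0, tol=0.05):
    """Hinge-style penalty for the geometric constraint x^2 + y^2 = r^2.

    Violations smaller than `tol` incur no penalty, mimicking a
    deterministic-yet-imprecise constraint that is only enforced up to a
    specified interval. (Illustrative sketch; the paper's exact loss may
    differ.)
    """
    residual = np.abs(np.sum(samples**2, axis=1) - radius**2)
    return float(np.mean(np.maximum(residual - tol, 0.0) ** 2))

def divergence_penalty(u, v, dx=1.0):
    """Finite-difference penalty on div(u, v) = du/dx + dv/dy for fields
    sampled on a uniform grid (rows = y, columns = x)."""
    div = np.gradient(u, dx, axis=1) + np.gradient(v, dx, axis=0)
    return float(np.mean(div**2))

# Points exactly on the unit circle incur zero penalty:
theta = np.linspace(0.0, 2 * np.pi, 100, endpoint=False)
on_circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)
print(circle_constraint_penalty(on_circle))        # 0.0 (within tol)
print(circle_constraint_penalty(1.5 * on_circle))  # > 0: off-circle points

# A divergence-free field, u = y and v = x, also incurs zero penalty:
xs = np.linspace(0.0, 1.0, 16)
X, Y = np.meshgrid(xs, xs)
print(divergence_penalty(Y, X))  # 0.0
```

In a training loop, such a penalty would be evaluated on each batch of generated samples and added (with a weight λ) to the adversarial loss before backpropagating through the generator, acting as the physics-based regularization the abstract credits for faster, more robust convergence.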
ISSN: 1991-7120. DOI: https://doi.org/10.4208/cicp.OA-2020-0106. URL: http://global-sci.org/intro/article_detail/cicp/19306.html