TY - JOUR
T1 - Enforcing Imprecise Constraints on Generative Adversarial Networks for Emulating Physical Systems
AU - Zeng, Yang
AU - Wu, Jin-Long
AU - Xiao, Heng
JO - Communications in Computational Physics
VL - 30
IS - 3
SP - 635
EP - 665
PY - 2021
DA - 2021/07
DO - http://doi.org/10.4208/cicp.OA-2020-0106
UR - https://global-sci.org/intro/article_detail/cicp/19306.html
KW - Generative adversarial networks, physics constraints, physics-informed machine learning.
AB -
Generative adversarial networks (GANs) were initially proposed to generate images by learning from a large number of samples. Recently, GANs have been used to emulate complex physical systems such as turbulent flows. However, a critical question must be answered before GANs can be considered trusted emulators of physical systems: do GAN-generated samples conform to the various physical constraints? These include both deterministic constraints (e.g., conservation laws) and statistical constraints (e.g., the energy spectrum of turbulent flows). The latter have been studied in a companion paper (Wu et al., Enforcing statistical constraints in generative adversarial networks for modeling chaotic dynamical systems, Journal of Computational Physics, 406, 109209, 2020). In the present work, we enforce deterministic yet imprecise constraints on GANs by incorporating them into the loss function of the generator. We evaluate the performance of physics-constrained GANs on two representative tasks: one with a geometric constraint (generating points on circles) and one with a differential constraint (generating divergence-free flow velocity fields). In both cases, the constrained GANs produce samples that conform to the underlying constraints rather accurately, even though the constraints are enforced only up to a specified interval. More importantly, the imposed constraints significantly accelerate convergence and improve the robustness of training, indicating that they serve as a physics-based regularization. These improvements are noteworthy, as convergence and robustness are two well-known obstacles in the training of GANs.
ER -
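The abstract describes incorporating deterministic yet imprecise constraints into the generator's loss so that violations are penalized only beyond a specified interval. Below is a minimal sketch of that idea for the circle task, not the authors' implementation: the hinge-squared penalty form, the network sizes, and the hyperparameters `lam` and `eps` are illustrative assumptions.

```python
# Sketch: generator loss with an "imprecise" geometric constraint.
# Points (x, y) should satisfy x^2 + y^2 = r^2, but the penalty
# activates only when the residual leaves the interval [-eps, eps].
import torch
import torch.nn as nn

latent_dim, r, eps, lam = 8, 1.0, 0.05, 10.0  # assumed hyperparameters

generator = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, 2))
discriminator = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))

def generator_loss(z):
    xy = generator(z)                           # candidate points in R^2
    adv = -discriminator(xy).mean()             # standard adversarial term
    residual = (xy ** 2).sum(dim=1) - r ** 2    # constraint residual x^2 + y^2 - r^2
    # Imprecise constraint: penalize only violations beyond the tolerance eps.
    penalty = torch.relu(residual.abs() - eps).pow(2).mean()
    return adv + lam * penalty

z = torch.randn(128, latent_dim)
loss = generator_loss(z)
loss.backward()   # gradients flow through both the adversarial and physics terms
```

The penalty vanishes whenever the residual stays within the tolerance band, mirroring the "enforced up to a specified interval" behavior, and otherwise acts as a physics-based regularizer on the generator; a differential constraint such as divergence-freeness would follow the same pattern with the residual replaced by a discrete divergence of the generated velocity field.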