Deep learning algorithms have been successfully applied to numerically
solve linear Kolmogorov partial differential equations (PDEs). Recent work shows that if the initial functions are bounded, empirical risk minimization (ERM) over clipped ReLU networks generalizes well for solving linear Kolmogorov PDEs. In this paper, we propose a truncation technique to extend these generalization results to polynomially growing initial functions.
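To fix ideas (the notation below is ours, not fixed by the abstract), the underlying learning problem can be sketched as follows: given i.i.d. samples $(X_i, Y_i)_{i=1}^{n}$, where each label $Y_i$ is generated from $X_i$ through the stochastic process associated with the PDE via the Feynman-Kac formula, the ERM estimator over a hypothesis class $\mathcal{H}$ of clipped ReLU networks is
\[
\widehat{u}_n \in \operatorname*{arg\,min}_{\phi \in \mathcal{H}} \frac{1}{n}\sum_{i=1}^{n} \bigl(\phi(X_i) - Y_i\bigr)^2 .
\]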
Specifically, we prove that, under a suitable assumption, the sample size required to achieve a generalization error within $\varepsilon$ with confidence level $\varrho$ grows polynomially in the size of the clipped neural networks and in $(\varepsilon^{-1}, \varrho^{-1})$, which means that the curse of dimensionality is broken.
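Read schematically (the constant $C$ and the exponents $c_1, c_2, c_3$ below are placeholders, not the precise statement), the result takes the form
\[
n \;\ge\; C\,\mathrm{size}(\mathcal{H})^{c_1}\,\varepsilon^{-c_2}\,\varrho^{-c_3}
\quad\Longrightarrow\quad
\mathbb{P}\bigl[\mathcal{E}(\widehat{u}_n) \le \varepsilon\bigr] \ge 1 - \varrho,
\]
where $\mathcal{E}(\widehat{u}_n)$ denotes the generalization error of the ERM estimator.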
Moreover, we verify that the required assumptions hold for Black-Scholes PDEs and heat equations, two important special cases of linear Kolmogorov PDEs.
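For concreteness, a linear Kolmogorov PDE with initial function $\varphi$ can be written as
\[
\partial_t u(t,x) \;=\; \tfrac{1}{2}\operatorname{Trace}\!\bigl(\sigma(x)\sigma(x)^{T}\nabla_x^2 u(t,x)\bigr) + \mu(x)\cdot\nabla_x u(t,x), \qquad u(0,x) = \varphi(x),
\]
where the heat equation corresponds to constant $\sigma$ and vanishing $\mu$, and the Black-Scholes PDE to coefficient functions $\sigma$ and $\mu$ that are linear in $x$.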
For the approximation error, we establish, under certain assumptions, that clipped ReLU neural networks can approximate the solutions of linear Kolmogorov PDEs. Consequently, ERM over artificial neural networks indeed overcomes the curse of dimensionality for a larger class of linear Kolmogorov PDEs.
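As a purely illustrative companion (the code below is not from the paper; the architecture, truncation level $D$, and all hyperparameters are our own arbitrary choices), the following sketch instantiates ERM over a clipped ReLU network for the $d$-dimensional heat equation $\partial_t u = \Delta u$ with the polynomially growing initial function $\varphi(x) = \lVert x \rVert^2$, with labels generated via the Feynman-Kac representation $u(T,x) = \mathbb{E}[\varphi(x + \sqrt{2T}\,Z)]$, $Z \sim \mathcal{N}(0, I_d)$:
\begin{verbatim}
# Illustrative sketch only: ERM over a clipped ReLU network for the
# d-dimensional heat equation u_t = Laplacian(u) with the polynomially
# growing initial function phi(x) = ||x||^2, truncated at level D.
import torch

d, n, D, T = 10, 50_000, 50.0, 1.0   # dimension, sample size, truncation, time

def phi(x):                           # polynomially growing initial function
    return (x ** 2).sum(dim=1, keepdim=True)

# Data via Feynman-Kac: Y = phi(X + sqrt(2T) Z), Z ~ N(0, I_d).
X = 2.0 * torch.rand(n, d) - 1.0      # features uniform on [-1, 1]^d
Y = phi(X + (2.0 * T) ** 0.5 * torch.randn(n, d))
Y = Y.clamp(-D, D)                    # truncation of the unbounded labels

net = torch.nn.Sequential(            # plain ReLU network ...
    torch.nn.Linear(d, 128), torch.nn.ReLU(),
    torch.nn.Linear(128, 128), torch.nn.ReLU(),
    torch.nn.Linear(128, 1),
)
clipped = lambda x: net(x).clamp(-D, D)   # ... clipped to [-D, D] at the output

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2_000):             # empirical risk minimization (least squares)
    idx = torch.randint(0, n, (1024,))
    loss = ((clipped(X[idx]) - Y[idx]) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    if step % 500 == 0:
        print(f"step {step:5d}  empirical risk {loss.item():.4f}")
\end{verbatim}
The output clamp implements the clipping of the network, while the label clamp implements the truncation of the unbounded initial function; both mirror, in simplified form, the two devices discussed above.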