Numer. Math. Theor. Meth. Appl., 3 (2010), pp. 79-96.
Published online: 2010-03
The state equations of stochastic control problems, which are controlled stochastic differential equations, are proposed to be discretized by the weak midpoint rule and predictor-corrector methods within the Markov chain approximation approach. Local consistency of both methods is proved. Numerical tests on a simplified Merton's portfolio model show that these two methods approximate the feedback control rules more accurately than the weak Euler-Maruyama discretization used by Krawczyk. This suggests a new approach to improving the accuracy of approximating Markov chains for stochastic control problems.
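To make the distinction between the discretizations concrete, the following is a minimal sketch of one weak Euler-Maruyama step and one weak predictor-corrector step for a scalar controlled SDE dX = b(X,u) dt + sigma(X,u) dW, using a two-point random increment as is common in weak schemes. The drift, diffusion, and control values below are illustrative placeholders, not the paper's Merton portfolio model or its exact scheme.

```python
import numpy as np

def weak_increment(h, rng):
    """Two-point increment (+/- sqrt(h)) matching the first moments of sqrt(h)*N(0,1)."""
    return np.sqrt(h) * rng.choice([-1.0, 1.0])

def euler_maruyama_step(x, u, h, b, sigma, rng):
    """Plain weak Euler-Maruyama step."""
    dw = weak_increment(h, rng)
    return x + b(x, u) * h + sigma(x, u) * dw

def predictor_corrector_step(x, u, h, b, sigma, rng):
    """Weak predictor-corrector step: Euler predictor, trapezoidal drift corrector."""
    dw = weak_increment(h, rng)
    x_pred = x + b(x, u) * h + sigma(x, u) * dw   # predictor (Euler)
    drift = 0.5 * (b(x, u) + b(x_pred, u))        # average drift at old and predicted points
    return x + drift * h + sigma(x, u) * dw       # corrector

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    b = lambda x, u: u * x          # placeholder controlled drift
    sigma = lambda x, u: 0.2 * x    # placeholder diffusion
    x, u, h = 1.0, 0.05, 0.01
    print(euler_maruyama_step(x, u, h, b, sigma, rng))
    print(predictor_corrector_step(x, u, h, b, sigma, rng))
```

The weak midpoint rule is omitted here because it is implicit in the drift; the predictor-corrector step above avoids that implicit solve while still re-evaluating the drift at an intermediate state.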
doi: 10.4208/nmtma.2009.m99006