Volume 3, Issue 2
An $L^2$ Analysis of Reinforcement Learning in High Dimensions with Kernel and Neural Network Approximation

Jihao Long, Jiequn Han & Weinan E

CSIAM Trans. Appl. Math., 3 (2022), pp. 191-220.

Published online: 2022-05

  • Abstract

Reinforcement learning (RL) algorithms based on high-dimensional function approximation have achieved tremendous empirical success in large-scale problems with an enormous number of states. However, most analyses of such algorithms give rise to error bounds that involve either the number of states or the number of features. This paper considers the situation where the function approximation is carried out using either the kernel method or the two-layer neural network model, in the context of a fitted Q-iteration algorithm with explicit regularization. We establish an $\tilde{O}(H^3|\mathcal{A}|^{\frac{1}{4}} n^{-\frac{1}{4}})$ bound on the suboptimality of the learned policy with $Hn$ samples, where $H$ is the length of each episode and $|\mathcal{A}|$ is the size of the action space. Our analysis hinges on bounding the $L^2$ error of the approximated Q-function estimated from $n$ data points. Even though this result still requires a finite action space, the error bound is independent of the dimensionality of the state space.
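
To make the algorithmic setting concrete, the sketch below shows backward fitted Q-iteration over an episode of length $H$ with kernel ridge regression as the function approximator; the ridge penalty plays the role of the explicit regularization mentioned above. This is an illustrative reading of the setup only: the function names (rbf_kernel, fitted_q_iteration), the Gaussian kernel, and the parameters lam and bandwidth are assumptions made for the sketch, not the paper's exact construction.

import numpy as np

def rbf_kernel(X, Y, bandwidth=1.0):
    # Gaussian kernel matrix between the rows of X (m, d) and Y (n, d).
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2.0 * bandwidth ** 2))

def fitted_q_iteration(data, H, n_actions, lam=1.0, bandwidth=1.0):
    # data[h] = (S, A, R, S_next): n transitions collected at step h,
    # with states S, S_next of shape (n, d), integer actions A of shape (n,),
    # and rewards R of shape (n,). Returns per-step Q-function estimates.
    Q_next = None                      # Q_H := 0 beyond the horizon
    Q_funcs = [None] * H
    for h in reversed(range(H)):       # backward induction over the episode
        S, A, R, S_next = data[h]
        if Q_next is None:
            y = R.astype(float)
        else:
            # Bellman target: r + max_a Q_{h+1}(s', a)
            q_next = np.stack([Q_next(S_next, a) for a in range(n_actions)], axis=1)
            y = R + q_next.max(axis=1)
        # Finite action space: fit one kernel ridge regressor per action.
        # The ridge term lam * I is the explicit regularization.
        alphas, supports = [], []
        for a in range(n_actions):
            mask = A == a
            Xa, ya = S[mask], y[mask]
            K = rbf_kernel(Xa, Xa, bandwidth)
            alphas.append(np.linalg.solve(K + lam * np.eye(len(ya)), ya))
            supports.append(Xa)
        def Q_h(X, a, _alphas=alphas, _supports=supports):
            return rbf_kernel(X, _supports[a], bandwidth) @ _alphas[a]
        Q_funcs[h] = Q_h
        Q_next = Q_h
    return Q_funcs

# Toy usage on synthetic data (d = 8, two actions, H = 5, n = 200 per step).
rng = np.random.default_rng(0)
data = []
for h in range(5):
    S = rng.normal(size=(200, 8)); A = rng.integers(0, 2, size=200)
    R = rng.normal(size=200); S_next = rng.normal(size=(200, 8))
    data.append((S, A, R, S_next))
Q = fitted_q_iteration(data, H=5, n_actions=2)

Because the finite action space $\mathcal{A}$ is handled by fitting one regressor per action, and the regression at step $h$ feeds the targets at step $h-1$, the per-step $L^2$ regression errors accumulate backward through the horizon; controlling that accumulation is what produces horizon-dependent factors such as the $H^3$ in the bound above.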

  • AMS Subject Headings

68Q25, 62R07, 68T07, 93C55, 93C57

  • Copyright

COPYRIGHT: © Global Science Press

  • Keywords

Reinforcement learning, function approximation, neural networks, reproducing kernel Hilbert space

  • Citation

Long, Jihao, Han, Jiequn and E, Weinan (2022). An $L^2$ Analysis of Reinforcement Learning in High Dimensions with Kernel and Neural Network Approximation. CSIAM Transactions on Applied Mathematics, 3(2), 191-220. doi:10.4208/csiam-am.SO-2021-0026