Volume 2, Issue 3
Reinforcement Learning with Function Approximation: From Linear to Nonlinear
Featured Review

Jihao Long & Jiequn Han

J. Mach. Learn., 2 (2023), pp. 161-193.

Published online: 2023-09

[An open-access article; the PDF is free to any online user.]

  • Abstract

Function approximation has been an indispensable component in modern reinforcement learning algorithms designed to tackle problems with large state spaces in high dimensions. This paper reviews recent results on error analysis for these reinforcement learning algorithms in linear and nonlinear approximation settings, emphasizing approximation error and estimation error/sample complexity. We discuss various properties related to the approximation error and present concrete conditions on the transition probability and reward function under which these properties hold. Sample complexity analysis in reinforcement learning is more complicated than in supervised learning, primarily due to the distribution mismatch phenomenon. Under assumptions on the linear structure of the problem, numerous algorithms in the literature achieve polynomial sample complexity with respect to the number of features, the episode length, and the accuracy, although the minimax rate has not yet been achieved. These results rely on $L^\infty$ and UCB estimates of the estimation error, which can handle the distribution mismatch phenomenon. The problem and analysis become substantially more challenging in the setting of nonlinear function approximation, as both $L^\infty$ and UCB estimation are inadequate for bounding the error at a favorable rate in high dimensions. We discuss the additional assumptions necessary to address the distribution mismatch and derive meaningful results for nonlinear RL problems.
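
To make the linear setting and the UCB-type estimation referred to above concrete, here is a minimal sketch following the standard linear MDP formulation (a common assumption in this literature, not a statement of the paper's own results): the transition kernel and reward are assumed to factor through a known feature map $\phi: \mathcal{S} \times \mathcal{A} \to \mathbb{R}^d$,
$$P_h(s' \mid s, a) = \langle \phi(s,a), \mu_h(s') \rangle, \qquad r_h(s,a) = \langle \phi(s,a), \theta_h \rangle,$$
where $\mu_h$ and $\theta_h$ are unknown parameters. UCB-style algorithms then add an exploration bonus built from the regularized empirical feature covariance,
$$\Lambda_h = \sum_{\tau} \phi(s_h^\tau, a_h^\tau)\, \phi(s_h^\tau, a_h^\tau)^\top + \lambda I, \qquad b_h(s,a) = \beta \sqrt{\phi(s,a)^\top \Lambda_h^{-1} \phi(s,a)},$$
with $\lambda > 0$ a regularization constant and $\beta$ the confidence width. Under such structural assumptions, the resulting sample-complexity bounds are polynomial in the feature dimension $d$, the episode length, and the accuracy, in line with the results surveyed in the paper.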

  • AMS Subject Headings

  • Copyright

COPYRIGHT: © Global Science Press

Long, Jihao and Han, Jiequn. (2023). Reinforcement Learning with Function Approximation: From Linear to Nonlinear. Journal of Machine Learning. 2 (3). 161-193. doi:10.4208/jml.230105