TY - JOUR
T1 - Solving the $k$-Sparse Eigenvalue Problem with Reinforcement Learning
AU - Zhou, Li
AU - Yan, Lihao
AU - Caprio, Mark A.
AU - Gao, Weiguo
AU - Yang, Chao
JO - CSIAM Transactions on Applied Mathematics
VL - 4
SP - 697
EP - 723
PY - 2021
DA - 2021/11
SN - 2
DO - http://doi.org/10.4208/csiam-am.2020-0220
UR - https://global-sci.org/intro/article_detail/csiam-am/19989.html
KW - Large-scale eigenvalue problem
KW - quantum many-body problem
KW - eigenvector localization
KW - reinforcement learning
KW - approximate $Q$-learning
KW - stochastic sampling
KW - high performance computing
AB -
We examine the possibility of using a reinforcement learning (RL) algorithm to solve large-scale eigenvalue problems in which the desired eigenvector can be approximated by a sparse vector with at most $k$ nonzero elements, where $k$ is relatively small compared to the dimension of the matrix to be partially diagonalized. This type of problem arises in applications in which the desired eigenvector exhibits localization properties and in large-scale eigenvalue computations in which the amount of computational resources is limited. When the positions of these nonzero elements can be determined, we can obtain the $k$-sparse approximation to the original problem by computing the eigenvalue of a $k\times k$ submatrix extracted from $k$ rows and columns of the original matrix. We review a previously developed greedy algorithm for incrementally probing the positions of the nonzero elements in a $k$-sparse approximate eigenvector and show that the greedy algorithm can be improved by using an RL method to refine the selection of the $k$ rows and columns of the original matrix. We describe how to represent states, actions, rewards and policies in an RL algorithm designed to solve the $k$-sparse eigenvalue problem, and we demonstrate the effectiveness of the RL algorithm on two examples originating from quantum many-body physics.
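As a rough illustration of the submatrix construction described in the abstract, the sketch below (not code from the paper) extracts the $k$ rows and columns indexed by a support set, solves the small eigenvalue problem, and grows the support with a simple greedy probe. The matrix `H`, the sparsity `k`, the starting index, and the greedy scoring rule are all illustrative assumptions; the loop is only a simplified stand-in for the greedy algorithm the paper reviews and the RL refinement it proposes.

```python
# Minimal sketch of the k-sparse eigenvalue idea: restrict a symmetric matrix H
# to a support set S of k indices, diagonalize the k x k submatrix H[S, S], and
# embed the submatrix eigenvector back into a length-n vector with k nonzeros.
# The greedy support growth below is an assumption for this demo, not the
# paper's algorithm.
import numpy as np


def sparse_eigen_from_support(H, support):
    """Smallest eigenpair of H restricted to `support`, embedded in R^n."""
    sub = H[np.ix_(support, support)]
    w, V = np.linalg.eigh(sub)            # dense solve is cheap for small k
    x = np.zeros(H.shape[0])
    x[support] = V[:, 0]                  # eigenvector of the k x k submatrix
    return w[0], x


def greedy_support(H, k):
    """Grow the support one index at a time, keeping the index that lowers
    the smallest submatrix eigenvalue the most (a simple greedy probe)."""
    n = H.shape[0]
    support = [int(np.argmin(np.diag(H)))]  # seed with the smallest diagonal entry
    while len(support) < k:
        best = None
        for j in range(n):
            if j in support:
                continue
            lam, _ = sparse_eigen_from_support(H, support + [j])
            if best is None or lam < best[0]:
                best = (lam, j)
        support.append(best[1])
    return sorted(support)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, k = 200, 10
    A = rng.standard_normal((n, n))
    H = (A + A.T) / 2 + np.diag(np.linspace(0.0, 50.0, n))  # low end is localized
    S = greedy_support(H, k)
    lam, x = sparse_eigen_from_support(H, S)
    print("support:", S)
    print("k-sparse eigenvalue estimate:", lam)
```

In the paper's setting, an RL policy would replace the hard-coded greedy choice of which index to add (or swap), using the submatrix eigenvalue as part of the reward signal; the sketch only fixes the surrounding linear-algebra scaffolding.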