Volume 37, Issue 5
Convergence Rate of Gradient Descent Method for Multi-Objective Optimization

J. Comp. Math., 37 (2019), pp. 689-703.

Published online: 2019-03


• Abstract

The convergence rate of the gradient descent method is considered for unconstrained multi-objective optimization problems (MOP). Under standard assumptions, we prove that the gradient descent method with constant stepsizes converges sublinearly when the objective functions are convex, and that the rate can be strengthened to linear when the objective functions are strongly convex. The results are also extended to the gradient descent method with the Armijo line search. Hence, the gradient descent method for MOP enjoys the same convergence properties as its scalar-optimization counterpart.
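The method analyzed in the abstract can be illustrated on a toy bi-objective problem (the example and all names below are illustrative, not from the paper). The sketch uses the standard multi-objective steepest-descent direction, i.e. the negative of the minimum-norm element of the convex hull of the objective gradients, which for two objectives has a closed form, together with the multi-objective Armijo rule that requires sufficient decrease in every objective:

```python
import numpy as np

# Hypothetical toy problem: two strongly convex objectives
# f_i(x) = ||x - c_i||^2, whose Pareto set is the segment between
# the centres c_1 and c_2 (this setup is an assumption for illustration).
C = [np.array([1.0, 0.0]), np.array([-1.0, 0.0])]

def fvals(x):
    return [float((x - c) @ (x - c)) for c in C]

def grads(x):
    return [2.0 * (x - c) for c in C]

def descent_direction(g1, g2):
    """Negative of the min-norm element of conv{g1, g2}.
    Closed form for two objectives; m > 2 needs a small QP."""
    v = g2 - g1
    denom = v @ v
    lam = 0.5 if denom == 0.0 else float(np.clip((g2 @ v) / denom, 0.0, 1.0))
    return -(lam * g1 + (1.0 - lam) * g2)

def mo_gradient_descent(x, sigma=1e-4, beta=0.5, tol=1e-8, max_iter=500):
    for _ in range(max_iter):
        g1, g2 = grads(x)
        d = descent_direction(g1, g2)
        if np.linalg.norm(d) < tol:   # Pareto-critical point reached
            break
        f0, gs = fvals(x), [g1, g2]
        t = 1.0
        # Armijo rule for MOP: sufficient decrease in EVERY objective.
        while any(fn > f + sigma * t * (g @ d)
                  for fn, f, g in zip(fvals(x + t * d), f0, gs)):
            t *= beta
        x = x + t * d
    return x

x_star = mo_gradient_descent(np.array([0.0, 2.0]))
```

Started at (0, 2), the iterates drop onto the Pareto segment between the two centres, consistent with the linear rate the paper proves for strongly convex objectives.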

• Keywords

Multi-objective optimization, Gradient descent, Convergence rate.

90C29, 90C52

zengly@lsec.cc.ac.cn (Liaoyuan Zeng)

dyh@lsec.cc.ac.cn (Yuhong Dai)

huangyakui2006@gmail.com (Yakui Huang)

• BibTeX
• RIS
• TXT
@Article{JCM-37-689,
  author  = {Zeng, Liaoyuan and Dai, Yuhong and Huang, Yakui},
  title   = {Convergence Rate of Gradient Descent Method for Multi-Objective Optimization},
  journal = {Journal of Computational Mathematics},
  year    = {2019},
  volume  = {37},
  number  = {5},
  pages   = {689--703},
  issn    = {1991-7139},
  doi     = {10.4208/jcm.1808-m2017-0214},
  url     = {http://global-sci.org/intro/article_detail/jcm/13041.html}
}
TY  - JOUR
T1  - Convergence Rate of Gradient Descent Method for Multi-Objective Optimization
AU  - Zeng, Liaoyuan
AU  - Dai, Yuhong
AU  - Huang, Yakui
JO  - Journal of Computational Mathematics
VL  - 37
IS  - 5
SP  - 689
EP  - 703
PY  - 2019
DA  - 2019/03
SN  - 1991-7139
DO  - 10.4208/jcm.1808-m2017-0214
UR  - https://global-sci.org/intro/jcm/13041.html
KW  - Multi-objective optimization
KW  - Gradient descent
KW  - Convergence rate
ER  -

Liaoyuan Zeng, Yuhong Dai & Yakui Huang. (2019). Convergence Rate of Gradient Descent Method for Multi-Objective Optimization. Journal of Computational Mathematics. 37 (5). 689-703. doi:10.4208/jcm.1808-m2017-0214