An NGTN method is proposed for solving large-scale sparse nonlinear programming (NLP) problems. It is a hybrid method that combines a truncated Newton direction with a modified negative gradient direction; it is well suited to sparse data structures and possesses a Q-quadratic convergence rate. This paper proves the global convergence of the new method, further analyses its convergence rate, and discusses the implementation in detail. Numerical tests on truss optimization and large sparse problems are reported. The theoretical and numerical results show that the new method is efficient for solving large-scale sparse NLP problems.
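The abstract does not spell out the NGTN rules themselves, so the following is only a minimal sketch of the general pattern it describes: compute a truncated Newton direction by a matrix-free conjugate-gradient inner loop, and fall back to the negative gradient direction when that direction is unavailable or fails a descent test. Everything here is an illustrative assumption, not the paper's method: the names `ngtn_sketch` and `truncated_cg`, the forcing sequence `eta`, the descent test, and the Armijo backtracking line search.

```python
import numpy as np

def ngtn_sketch(f, grad, hess_vec, x0, tol=1e-6, max_iter=100):
    """Hedged sketch of a hybrid truncated-Newton / negative-gradient
    iteration. Illustrative only; not the paper's NGTN method.

    f        : objective R^n -> R
    grad     : gradient of f
    hess_vec : (x, p) -> H(x) @ p, a matrix-free Hessian-vector product,
               which keeps storage linear in n for large sparse problems
    """
    x = x0.copy()
    for _ in range(max_iter):
        g = grad(x)
        gnorm = np.linalg.norm(g)
        if gnorm < tol:
            break
        # Inner CG loop truncated by a forcing sequence eta -> 0; driving
        # the inner residual down with gnorm is what gives truncated
        # Newton methods their fast local convergence.
        eta = min(0.5, np.sqrt(gnorm))
        d = truncated_cg(hess_vec, x, -g, eta * gnorm)
        # The "hybrid" switch: use the negative gradient direction when
        # the truncated Newton direction is missing or not a
        # sufficiently good descent direction (assumed test).
        if d is None or g @ d > -1e-10 * gnorm * np.linalg.norm(d):
            d = -g
        # Backtracking (Armijo) line search.
        t, fx = 1.0, f(x)
        while f(x + t * d) > fx + 1e-4 * t * (g @ d):
            t *= 0.5
            if t < 1e-12:
                break
        x = x + t * d
    return x

def truncated_cg(hess_vec, x, b, res_tol, max_cg=50):
    """Conjugate gradients on H d = b, stopped early at res_tol.
    Returns None (forcing the gradient fallback) if non-positive
    curvature is detected before any progress is made."""
    d = np.zeros_like(b)
    r = b.copy()              # residual b - H d (d starts at 0)
    p = r.copy()
    rr = r @ r
    for _ in range(max_cg):
        Hp = hess_vec(x, p)
        pHp = p @ Hp
        if pHp <= 0:          # non-positive curvature encountered
            return d if d.any() else None
        alpha = rr / pHp
        d += alpha * p
        r -= alpha * Hp
        rr_new = r @ r
        if np.sqrt(rr_new) <= res_tol:
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    return d
```

Under these assumptions, the matrix-free `hess_vec` interface is the design choice that matters for the large sparse setting: the Hessian is never formed or factored, only applied, so each outer iteration costs a modest number of sparse matrix-vector products.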