Convergence of the iterates of descent methods for analytic cost functions

P. A. Absil*, R. Mahony, B. Andrews

*Corresponding author for this work

    Research output: Contribution to journal › Article › peer-review

    203 Citations (Scopus)

    Abstract

    In the early eighties, Łojasiewicz [in Seminari di Geometria, 1982-1983, Università di Bologna, Istituto di Geometria, Dipartimento di Matematica, 1984, pp. 115-117] proved that a bounded solution of a gradient flow for an analytic cost function converges to a well-defined limit point. In this paper, we show that the iterates of numerical descent algorithms for an analytic cost function share this convergence property provided they satisfy certain natural descent conditions. The results obtained are applicable to a broad class of optimization schemes and strengthen classical "weak convergence" results for descent methods to "strong limit-point convergence" for a large class of cost functions of practical interest. The result does not require the cost to have isolated critical points, and it imposes no convexity assumptions on the cost and no nondegeneracy conditions on the Hessian of the cost at critical points.
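    The abstract can be illustrated with a small numerical sketch. The code below is not the paper's scheme; it is a hypothetical instance of a descent method satisfying an Armijo-type sufficient-decrease condition, applied to the analytic cost f(x, y) = (xy - 1)^2, whose minimizers form a whole curve (the hyperbola xy = 1), so critical points are not isolated. The convergence theory discussed above predicts that the iterates nonetheless settle on a single limit point.

    ```python
    import numpy as np

    # Analytic cost with non-isolated critical points: every point on the
    # hyperbola x*y = 1 is a global minimizer.
    def f(p):
        x, y = p
        return (x * y - 1.0) ** 2

    def grad_f(p):
        x, y = p
        r = 2.0 * (x * y - 1.0)
        return np.array([r * y, r * x])

    def descent(p0, steps=500, c=1e-4, rho=0.5):
        """Gradient descent with Armijo backtracking (a 'strong descent'
        sufficient-decrease condition, used here purely for illustration)."""
        p = np.asarray(p0, dtype=float)
        for _ in range(steps):
            g = grad_f(p)
            if np.linalg.norm(g) < 1e-12:
                break  # stop only at (numerically) critical points
            alpha = 1.0
            # Backtrack until f decreases by at least c * alpha * ||g||^2
            while f(p - alpha * g) > f(p) - c * alpha * np.dot(g, g):
                alpha *= rho
            p = p - alpha * g
        return p

    p_star = descent([3.0, 0.2])
    print("limit point:", p_star, " cost:", f(p_star))
    ```

    Despite the continuum of minimizers, the run converges to one well-defined point on the hyperbola rather than wandering along the critical set, which is the behavior the paper's theorem guarantees for analytic costs under such descent conditions.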

    Original language: English
    Pages (from-to): 531-547
    Number of pages: 17
    Journal: SIAM Journal on Optimization
    Volume: 16
    Issue number: 2
    DOIs
    Publication status: Published - 2006
