Least squares methods in maximum likelihood problems

M. R. Osborne*

*Corresponding author for this work

    Research output: Contribution to journal › Article › peer-review

    3 Citations (Scopus)

    Abstract

    The Gauss-Newton algorithm for nonlinear least squares problems proves particularly efficient for parameter estimation when the number of independent observations is large and the fitted model is appropriate. In this context the conventional assumption that the residuals are small is not needed. The Gauss-Newton method is a special case of the Fisher scoring algorithm for maximizing log likelihoods and shares with it a number of desirable properties. The formal structural correspondence is striking, with the linear subproblem for the general scoring algorithm having the form of a linear least squares problem. This is an important observation because it provides likelihood methods with a computational framework that accords with computational orthodoxy. Both line search and trust region algorithms are available, and these are compared and contrasted here. It is shown that the types of theoretical results that have led to the wide acceptance of trust region methods have direct equivalents in the line search case, while line search methods have better transformation invariance properties. Computational experiments for both continuous and discrete distributions show no advantage for the trust region approach.
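
    To make the correspondence described above concrete, the following is a minimal sketch (not taken from the paper, and using generic notation) of Fisher scoring written as an iterated linear least squares computation. For a Gaussian regression model with unit error variance the score is J^T r and the Fisher information is J^T J, so the scoring correction solves min_delta ||J delta - r||_2 and coincides with the Gauss-Newton step. The model, data, and backtracking line search below are hypothetical choices for illustration only.

```python
# Illustrative sketch (not from the paper): Fisher scoring as an iterated
# linear least squares computation, specialised to a Gaussian regression
# model, where the scoring step coincides with the Gauss-Newton correction.
import numpy as np

def scoring_gauss_newton(f, jac, y, theta0, max_iter=50, tol=1e-10):
    """Maximise the Gaussian log likelihood of y_i = f_i(theta) + noise.

    Each iteration solves the linear least squares subproblem
        min_delta || J(theta) delta - r(theta) ||_2,
    which is the scoring correction I(theta)^{-1} grad l(theta)
    when I = J^T J and grad l = J^T r (unit error variance assumed).
    """
    theta = np.asarray(theta0, dtype=float)
    for _ in range(max_iter):
        r = y - f(theta)                                 # residual vector
        J = jac(theta)                                   # Jacobian of f at theta
        delta, *_ = np.linalg.lstsq(J, r, rcond=None)    # scoring / Gauss-Newton correction
        # Simple backtracking line search on the log likelihood -0.5 * ||r||^2.
        step, ll_old = 1.0, -0.5 * (r @ r)
        while step > 1e-8 and -0.5 * np.sum((y - f(theta + step * delta)) ** 2) < ll_old:
            step *= 0.5
        theta = theta + step * delta
        if np.linalg.norm(step * delta) < tol * (1.0 + np.linalg.norm(theta)):
            break
    return theta

# Hypothetical usage: fit an exponential decay y ~ a * exp(-b * t).
t = np.linspace(0.0, 4.0, 200)
a_true, b_true = 2.0, 1.3
rng = np.random.default_rng(0)
y = a_true * np.exp(-b_true * t) + 0.05 * rng.standard_normal(t.size)
f = lambda th: th[0] * np.exp(-th[1] * t)
jac = lambda th: np.column_stack([np.exp(-th[1] * t), -th[0] * t * np.exp(-th[1] * t)])
print(scoring_gauss_newton(f, jac, y, theta0=[1.0, 1.0]))
```

    The trust region alternative discussed in the abstract would replace the step-length rule by a bound on the size of the correction; that variant is not sketched here.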

    Original language: English
    Pages (from-to): 943-959
    Number of pages: 17
    Journal: Optimization Methods and Software
    Volume: 21
    Issue number: 6
    DOIs
    Publication status: Published - 1 Dec 2006
