A stochastic quasi-Newton method for online convex optimization

Nicol N. Schraudolph*, Jin Yu, Simon Günter

*Corresponding author for this work

    Research output: Contribution to journal › Conference article › peer-review

    198 Citations (Scopus)

    Abstract

    We develop stochastic variants of the well-known BFGS quasi-Newton optimization method, in both full and memory-limited (LBFGS) forms, for online optimization of convex functions. The resulting algorithm performs comparably to a well-tuned natural gradient descent but is scalable to very high-dimensional problems. On standard benchmarks in natural language processing, it asymptotically outperforms previous stochastic gradient methods for parameter estimation in conditional random fields. We are working on analyzing the convergence of online (L)BFGS and extending it to nonconvex optimization problems.
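    To make the abstract's description concrete, here is a minimal NumPy sketch of an online LBFGS update of the kind it outlines: the standard two-loop recursion driven by stochastic minibatch gradients, with both gradients of each curvature pair evaluated on the same minibatch and a small damping term added to y. The gain schedule, the damping constant lam, the function names, and the toy least-squares problem are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np
from collections import deque

def lbfgs_direction(g, S, Y, eps=1e-10):
    """Two-loop recursion: apply the implicit inverse-Hessian estimate to g."""
    q = g.copy()
    rhos = [1.0 / (s @ y + eps) for s, y in zip(S, Y)]
    alphas = [0.0] * len(S)
    for i in reversed(range(len(S))):          # newest curvature pair first
        alphas[i] = rhos[i] * (S[i] @ q)
        q -= alphas[i] * Y[i]
    if S:  # scale H0 by an averaged s'y / y'y (more stable with noisy gradients)
        q *= np.mean([(s @ y) / (y @ y + eps) for s, y in zip(S, Y)])
    else:
        q *= 1e-3                              # tiny first step before any curvature info
    for i in range(len(S)):
        beta = rhos[i] * (Y[i] @ q)
        q += (alphas[i] - beta) * S[i]
    return q

def online_lbfgs(grad_fn, w0, batches, eta0=1.0, tau=100.0, m=10, lam=1e-4):
    """grad_fn(w, batch) -> stochastic gradient of the loss on that minibatch."""
    w = w0.copy()
    S, Y = deque(maxlen=m), deque(maxlen=m)    # last m curvature pairs
    for t, batch in enumerate(batches):
        g = grad_fn(w, batch)
        p = -lbfgs_direction(g, list(S), list(Y))
        eta = eta0 * tau / (tau + t)           # an illustrative decaying gain
        w_new = w + eta * p
        s = w_new - w
        # Reusing the SAME minibatch for both gradients keeps y a consistent
        # curvature estimate; lam*s damps noisy, near-flat directions.
        y = grad_fn(w_new, batch) - g + lam * s
        if s @ y > 0:                          # keep the inverse-Hessian estimate PSD
            S.append(s)
            Y.append(y)
        w = w_new
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.normal(size=(1000, 20))
    x_true = rng.normal(size=20)
    b = A @ x_true + 0.01 * rng.normal(size=1000)

    def grad_fn(w, idx):                       # minibatch least-squares gradient
        Ai, bi = A[idx], b[idx]
        return Ai.T @ (Ai @ w - bi) / len(idx)

    batches = [rng.choice(1000, size=32, replace=False) for _ in range(500)]
    w = online_lbfgs(grad_fn, np.zeros(20), batches)
    print("parameter error:", np.linalg.norm(w - x_true))
```

    The same-minibatch trick and the averaged initial scaling are what distinguish this online variant from a textbook LBFGS loop, which would otherwise build curvature pairs from gradients of two different samples and inherit their noise.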

    Original language: English
    Pages (from-to): 436-443
    Number of pages: 8
    Journal: Journal of Machine Learning Research
    Volume: 2
    Publication status: Published - 2007
    Event: 11th International Conference on Artificial Intelligence and Statistics, AISTATS 2007 - San Juan, Puerto Rico
    Duration: 21 Mar 2007 - 24 Mar 2007
