Accelerated training of conditional random fields with stochastic gradient methods

S. V.N. Vishwanathan*, Nicol N. Schraudolph, Mark W. Schmidt, Kevin P. Murphy

*Corresponding author for this work

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

    148 Citations (Scopus)

    Abstract

    We apply Stochastic Meta-Descent (SMD), a stochastic gradient optimization method with gain vector adaptation, to the training of Conditional Random Fields (CRFs). On several large data sets, the resulting optimizer converges to the same quality of solution over an order of magnitude faster than limited-memory BFGS, the leading method reported to date. We report results for both exact and inexact inference techniques.
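
    To make the gain-vector adaptation concrete, below is a minimal sketch of the SMD update rule applied to a hypothetical toy quadratic objective, not the CRF training setup of the paper; the objective, dimension, and hyperparameter values are illustrative assumptions.

```python
import numpy as np

# Sketch of Stochastic Meta-Descent (SMD) on a toy quadratic
# f(w) = 0.5 * w^T A w. Everything here (objective, dimension,
# hyperparameter values) is illustrative, not taken from the paper.

rng = np.random.default_rng(0)
d = 5
A = np.diag(np.linspace(1.0, 10.0, d))  # toy Hessian, fixed and known
w = rng.normal(size=d)                  # parameters

eta = np.full(d, 0.05)  # per-parameter gain vector
v = np.zeros(d)         # trace v_t ~ d w_t / d(log eta)
mu, lam = 0.1, 0.99     # meta-learning rate, trace decay

for t in range(200):
    g = A @ w    # gradient of the toy objective
    Hv = A @ v   # Hessian-vector product (exact for a quadratic)
    # Gain adaptation: eta grows where the current gradient agrees
    # with the accumulated update trace v (i.e. g * v < 0).
    eta *= np.maximum(0.5, 1.0 - mu * g * v)
    w -= eta * g                        # gradient step with local gains
    v = lam * v - eta * (g + lam * Hv)  # trace update

print("final loss:", 0.5 * w @ A @ w)
```

    In the paper's CRF setting the Hessian-vector product is not available in closed form as above; it is obtained cheaply during gradient computation, which is what keeps the per-iteration cost close to plain stochastic gradient descent.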

    Original language: English
    Title of host publication: ICML 2006 - Proceedings of the 23rd International Conference on Machine Learning
    Pages: 969-976
    Number of pages: 8
    Publication status: Published - 2006
    Event: ICML 2006: 23rd International Conference on Machine Learning - Pittsburgh, PA, United States
    Duration: 25 Jun 2006 – 29 Jun 2006

    Publication series

    Name: ICML 2006 - Proceedings of the 23rd International Conference on Machine Learning
    Volume: 2006

    Conference

    Conference: ICML 2006: 23rd International Conference on Machine Learning
    Country/Territory: United States
    City: Pittsburgh, PA
    Period: 25/06/06 – 29/06/06
