Finito: A faster, permutable incremental gradient method for big data problems

Aaron J. Defazio, Tibério S. Caetano, Justin Domke

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

    38 Citations (Scopus)

    Abstract

    Recent advances in optimization theory have shown that smooth strongly convex finite sums can be minimized faster than by treating them as a black box "batch" problem. In this work we introduce a new method in this class with a theoretical convergence rate four times faster than existing methods, for sums with sufficiently many terms. This method is also amenable to a sampling without replacement scheme that in practice gives further speed-ups. We give empirical results showing state-of-the-art performance.
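
    The record does not reproduce the paper's pseudocode. As a rough illustration of the class of method the abstract describes, the following is a minimal Python sketch of a Finito-style incremental gradient update: per-term points phi_i and their gradients are stored, each step moves to the mean of the phi_i minus a scaled mean of the stored gradients, and indices are visited by a fresh permutation each pass (the sampling-without-replacement scheme the abstract mentions). The function name, epoch count, and the exact step constant 1/mu are assumptions for illustration, not the paper's precise algorithm.

        import numpy as np

        def finito(grad_fns, dim, mu, n_epochs=50, seed=0):
            """Finito-style sketch for minimizing f(w) = (1/n) * sum_i f_i(w).

            grad_fns: list of per-term gradient functions R^dim -> R^dim.
            mu: strong-convexity constant. The step constant 1/mu below is a
            simplification; the paper's analysis uses a constant of this order
            under its big-data condition (sufficiently many terms n).
            """
            rng = np.random.default_rng(seed)
            n = len(grad_fns)
            phi = np.zeros((n, dim))                    # stored per-term points phi_i
            grads = np.array([g(phi[i]) for i, g in enumerate(grad_fns)])
            phi_mean = phi.mean(axis=0)                 # running mean of the phi_i
            grad_mean = grads.mean(axis=0)              # running mean of stored gradients

            for _ in range(n_epochs):
                # One random permutation per pass: sampling without replacement,
                # which the abstract reports gives further speed-ups in practice.
                for j in rng.permutation(n):
                    w = phi_mean - grad_mean / mu       # Finito-style step
                    phi_mean += (w - phi[j]) / n        # O(dim) running-mean updates
                    phi[j] = w
                    new_grad = grad_fns[j](w)
                    grad_mean += (new_grad - grads[j]) / n
                    grads[j] = new_grad
            return phi_mean - grad_mean / mu

        # Hypothetical usage: ridge-regularized least squares,
        # f_i(w) = 0.5*(a_i . w - b_i)^2 + 0.5*mu*|w|^2, which is mu-strongly convex.
        A = np.random.default_rng(1).normal(size=(100, 5))
        b = A @ np.ones(5)
        mu = 0.1
        grad_fns = [lambda w, a=a, y=y: a * (a @ w - y) + mu * w for a, y in zip(A, b)]
        w_star = finito(grad_fns, dim=5, mu=mu)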

    Original language: English
    Title of host publication: 31st International Conference on Machine Learning, ICML 2014
    Publisher: International Machine Learning Society (IMLS)
    Pages: 2839-2855
    Number of pages: 17
    ISBN (Electronic): 9781634393973
    Publication status: Published - 2014
    Event: 31st International Conference on Machine Learning, ICML 2014 - Beijing, China
    Duration: 21 Jun 2014 – 26 Jun 2014

    Publication series

    Name: 31st International Conference on Machine Learning, ICML 2014
    Volume: 4

    Conference

    Conference: 31st International Conference on Machine Learning, ICML 2014
    Country/Territory: China
    City: Beijing
    Period: 21/06/14 – 26/06/14
