Convex learning with invariances

Choon-Hui Teo, Amir Globerson, Sam Roweis, Alexander Smola

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

    Abstract

    Incorporating invariances into a learning algorithm is a common problem in machine learning. We provide a convex formulation which can deal with arbitrary loss functions and arbitrary invariances. In addition, it is a drop-in replacement for most optimization algorithms for kernels, including solvers of the SVMStruct family. The advantage of our setting is that it relies on column generation instead of modifying the underlying optimization problem directly.
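
    The record contains no code, so the following is a minimal sketch of the column-generation idea the abstract describes: rather than modifying the underlying optimization problem, each pass searches every training example's invariance orbit for its most margin-violating variant and takes a subgradient step against that variant. This is an illustration under simplifying assumptions (linear model, L2-regularized hinge loss, plain subgradient descent rather than an SVMStruct-family solver), not the authors' implementation; the name fit_invariant_svm and the transforms argument are hypothetical.

    ```python
    import numpy as np

    def fit_invariant_svm(X, y, transforms, lam=0.1, lr=0.1, epochs=50):
        """Train a linear classifier against the worst-case member of each
        example's invariance orbit (hypothetical sketch, not the paper's code).

        X: (n, d) inputs; y: (n,) labels in {-1, +1}.
        transforms: callables mapping a d-vector to a d-vector; include the
        identity so the original example is always considered.
        """
        n, d = X.shape
        w = np.zeros(d)
        for _ in range(epochs):
            grad = lam * w  # gradient of the L2 regularizer
            for i in range(n):
                # Column-generation step: search the orbit of x_i for the
                # variant that most violates the margin under the current w.
                variants = [t(X[i]) for t in transforms]
                margins = [y[i] * float(w @ v) for v in variants]
                worst = int(np.argmin(margins))
                if margins[worst] < 1.0:  # hinge loss active for that variant
                    grad -= y[i] * variants[worst] / n
            w -= lr * grad
        return w

    # Toy usage: invariance to small cyclic shifts of the input coordinates.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(40, 5))
    y = np.sign(X[:, 0])
    shifts = [lambda v: v, lambda v: np.roll(v, 1), lambda v: np.roll(v, -1)]
    w = fit_invariant_svm(X, y, shifts)
    ```

    Because only the most violating variant enters the gradient at each step, the orbit can be large (or searched approximately) without changing the convex objective itself, which is what makes the approach a drop-in addition to existing solvers.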
    Original language: English
    Title of host publication: Advances in Neural Information Processing Systems 20: Proceedings of the 2007 Conference
    Editors: John C. Platt, Daphne Koller, Yoram Singer and Sam Roweis
    Place of publication: Vancouver, Canada
    Publisher: MIT Press
    Pages: 1489-1496
    Edition: Peer Reviewed
    ISBN (Print): 9781605603520
    Publication status: Published - 2009
    Event: Conference on Advances in Neural Information Processing Systems (NIPS 2007) - Vancouver, Canada
    Duration: 1 Jan 2009 → …
    http://books.nips.cc/nips20.html

    Conference

    Conference: Conference on Advances in Neural Information Processing Systems (NIPS 2007)
    Period: 1/01/09 → …
    Other: December 3-6, 2007

