SimpleSVM

S. V. N. Vishwanathan*, Alexander J. Smola, M. Narasimha Murty

*Corresponding author for this work

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

    87 Citations (Scopus)

    Abstract

    We present a fast iterative support vector training algorithm for a large variety of different formulations. It works by incrementally changing a candidate support vector set using a greedy approach, until the supporting hyperplane is found within a finite number of iterations. It is derived from a simple active set method that sweeps through the set of Lagrange multipliers and keeps optimality in the unconstrained variables, while discarding large numbers of bound-constrained variables. The hard-margin version can be viewed as a simple (yet computationally crucial) modification of the incremental SVM training algorithms of Cauwenberghs and Poggio. Experimental results for various settings are reported. In all cases our algorithm is considerably faster than competing methods such as Sequential Minimal Optimization or the Nearest Point Algorithm.
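    The abstract describes the method only at a high level. As a rough, hypothetical illustration of the greedy active-set idea, the sketch below trains a hard-margin SVM by repeatedly solving the KKT system on a candidate support vector set, discarding multipliers that hit the bound alpha_i >= 0, and adding the worst margin violator. The function name simple_svm_sketch, the class-based seeding, and the tolerances are assumptions of this sketch, not the authors' implementation, which also covers soft-margin and other formulations.

```python
import numpy as np

def linear_kernel(A, B):
    # Plain linear kernel; any positive semi-definite kernel works here.
    return A @ B.T

def simple_svm_sketch(X, y, kernel=linear_kernel, max_iter=100, tol=1e-6):
    """Toy greedy active-set training of a hard-margin SVM.

    Keeps a candidate support vector set S, solves the subproblem on S
    exactly (treating those multipliers as unconstrained), discards
    points whose multiplier violates the bound alpha_i >= 0, and
    greedily adds the worst margin violator until none remain.
    """
    n = len(y)
    # Seed S with one example from each class (an assumption of this sketch).
    S = [int(np.argmax(y == 1)), int(np.argmax(y == -1))]
    alpha, b = np.zeros(n), 0.0
    for _ in range(max_iter):
        Xs, ys = X[S], y[S]
        # KKT system on S: y_i * f(x_i) = 1 for i in S, and sum_i alpha_i y_i = 0.
        Q = (ys[:, None] * ys[None, :]) * kernel(Xs, Xs)
        A = np.block([[Q, ys[:, None]],
                      [ys[None, :], np.zeros((1, 1))]])
        sol = np.linalg.lstsq(A, np.append(np.ones(len(S)), 0.0), rcond=None)[0]
        a_S, b = sol[:-1], sol[-1]
        if np.any(a_S < -tol):
            # Discard bound-constrained variables whose multiplier went negative.
            S = [i for i, a in zip(S, a_S) if a > -tol]
            continue
        alpha[:] = 0.0
        alpha[S] = np.maximum(a_S, 0.0)
        # Decision values f(x) for all points under the current hyperplane.
        f = (alpha[S] * y[S]) @ kernel(X[S], X) + b
        worst = int(np.argmin(y * f))
        if y[worst] * f[worst] >= 1.0 - tol or worst in S:
            break  # every margin constraint holds: supporting hyperplane found
        S.append(worst)  # greedily grow the candidate support vector set
    return alpha, b

# Tiny usage example on linearly separable data.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-3, 1, (20, 2)), rng.normal(3, 1, (20, 2))])
y = np.array([-1] * 20 + [1] * 20)
alpha, b = simple_svm_sketch(X, y)
print("support vectors:", np.flatnonzero(alpha > 1e-9))
```

    Because the subproblem on S is solved exactly at each step, every iteration either grows S by the worst violator or shrinks it by discarding bound-constrained variables, which mirrors the finite-termination claim in the abstract.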

    Original language: English
    Title of host publication: Proceedings, Twentieth International Conference on Machine Learning
    Editors: T. Fawcett, N. Mishra
    Pages: 760-767
    Number of pages: 8
    Publication status: Published - 2003
    Event: Twentieth International Conference on Machine Learning - Washington, DC, United States
    Duration: 21 Aug 2003 – 24 Aug 2003

    Publication series

    Name: Proceedings, Twentieth International Conference on Machine Learning
    Volume: 2

    Conference

    Conference: Twentieth International Conference on Machine Learning
    Country/Territory: United States
    City: Washington, DC
    Period: 21/08/03 – 24/08/03
