Learning with symmetric label noise: The importance of being unhinged

Brendan van Rooyen, Aditya Krishna Menon, Robert C. Williamson

    Research output: Contribution to journal › Conference article › peer-review

    212 Citations (Scopus)

    Abstract

    Convex potential minimisation is the de facto approach to binary classification. However, Long and Servedio [2010] proved that under symmetric label noise (SLN), minimisation of any convex potential over a linear function class can result in classification performance equivalent to random guessing. This ostensibly shows that convex losses are not SLN-robust. In this paper, we propose a convex, classification-calibrated loss and prove that it is SLN-robust. The loss avoids the Long and Servedio [2010] result by virtue of being negatively unbounded. The loss is a modification of the hinge loss, where one does not clamp at zero; hence, we call it the unhinged loss. We show that the optimal unhinged solution is equivalent to that of a strongly regularised SVM, and is the limiting solution for any convex potential; this implies that strong ℓ2 regularisation makes most standard learners SLN-robust. Experiments confirm the unhinged loss's SLN-robustness in practice. So, with apologies to Wilde [1895], while the truth is rarely pure, it can be simple.
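
    As a brief illustration of the closed form suggested by the abstract (a minimal sketch, not code from the paper): the unhinged loss is ℓ(v) = 1 − v on the margin v = y·s(x), i.e. the hinge loss max(0, 1 − v) without the clamp at zero. For a linear scorer s(x) = wᵀx with ℓ2 regularisation of strength λ, setting the gradient of the regularised empirical risk to zero gives w = (1/(λn)) Σᵢ yᵢxᵢ, a scaled class-mean (centroid) classifier. The toy dataset, noise rate rho, and strength lam below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class Gaussian data (an illustrative assumption, not from the paper).
n, d = 1000, 2
X = np.vstack([rng.normal(+1.0, 1.0, (n // 2, d)),
               rng.normal(-1.0, 1.0, (n // 2, d))])
y = np.hstack([np.ones(n // 2), -np.ones(n // 2)])

# Symmetric label noise: flip each label independently with probability rho.
rho = 0.4
y_noisy = np.where(rng.random(n) < rho, -y, y)

# Unhinged loss ell(v) = 1 - v with l2 regularisation lam:
# the gradient of (1/n) sum_i (1 - y_i w.x_i) + (lam/2) ||w||^2 vanishes at
# w = (1 / (lam * n)) * sum_i y_i x_i  -- a scaled mean-difference classifier.
lam = 1.0
w = (y_noisy @ X) / (lam * n)

# Under SLN, E[y_noisy * x] = (1 - 2*rho) * E[y * x]: the noise only rescales
# w, leaving the sign of w.x (and hence the classifier) unchanged in expectation.
acc = np.mean(np.sign(X @ w) == y)
print(f"accuracy on clean labels under {rho:.0%} symmetric noise: {acc:.3f}")
```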

    Original language: English
    Pages (from-to): 10-18
    Number of pages: 9
    Journal: Advances in Neural Information Processing Systems
    Volume: 2015-January
    Publication status: Published - 2015
    Event: 29th Annual Conference on Neural Information Processing Systems, NIPS 2015 - Montreal, Canada
    Duration: 7 Dec 2015 – 12 Dec 2015
