The geometry of losses

Robert C. Williamson*

*Corresponding author for this work

    Research output: Contribution to journal › Conference article › peer-review

    7 Citations (Scopus)

    Abstract

    Loss functions are central to machine learning because they are the means by which the quality of a prediction is evaluated. Any loss that is not proper, or cannot be transformed to be proper via a link function, is inadmissible. All admissible losses for n-class problems can be obtained in terms of a convex body in ℝⁿ. We show this explicitly and show how some existing results simplify when viewed from this perspective. This viewpoint allows the development of a rich algebra of losses induced by binary operations on convex bodies (that return a convex body). Furthermore, it allows us to define an "inverse loss", which provides a universal "substitution function" for the Aggregating Algorithm. In doing so we show a formal connection between proper losses and norms.
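
    The correspondence the abstract alludes to can be sketched in standard proper-loss notation; the symbols below (Δⁿ for the probability simplex, ℓ for the loss, S_ℓ for its superprediction set, and the conditional Bayes risk) are assumptions of this sketch rather than notation quoted from the paper.

        % A minimal sketch, assuming standard definitions of proper losses;
        % none of these symbols are taken from the abstract itself.
        \[
          S_\ell = \{\, x \in \mathbb{R}^n : \exists\, q \in \Delta^n,\ x \ge \ell(q) \text{ componentwise} \,\}
        \]
        \[
          \underline{L}(p) = \inf_{q \in \Delta^n} \langle p, \ell(q) \rangle
                           = \inf_{x \in S_\ell} \langle p, x \rangle, \qquad p \in \Delta^n
        \]
        % ℓ is proper when the first infimum is attained at q = p; the convex set
        % S_ℓ is (up to closure) the convex body through which the abstract says
        % every admissible n-class loss can be represented.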

    Original language: English
    Pages (from-to): 1078-1108
    Number of pages: 31
    Journal: Journal of Machine Learning Research
    Volume: 35
    Publication status: Published - 2014
    Event: 27th Conference on Learning Theory, COLT 2014 - Barcelona, Spain
    Duration: 13 Jun 2014 - 15 Jun 2014
