Divergences and risks for multiclass experiments

Darío García-García, Robert C. Williamson

    Research output: Contribution to journal · Conference article · Peer-reviewed

    9 Citations (Scopus)

    Abstract

    Csiszár's f-divergence is a way to measure the similarity of two probability distributions. We study the extension of f-divergences to more than two distributions in order to measure their joint similarity. By exploiting classical results from the comparison-of-experiments literature, we prove that the resulting divergence satisfies all the same properties as the traditional binary one. Considering the multidistribution case actually makes the proofs simpler. The key to these results is a formal bridge between these multidistribution f-divergences and Bayes risks for multiclass classification problems.
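    To make the binary starting point concrete, the sketch below computes a Csiszár f-divergence between two discrete distributions, D_f(P‖Q) = Σ_x q(x) f(p(x)/q(x)); instantiating f(t) = t log t recovers the Kullback-Leibler divergence. This is a minimal illustration of the classical binary notion the abstract refers to, not the paper's multidistribution construction; the function names are my own.

    ```python
    import numpy as np

    def f_divergence(p, q, f):
        """Csiszár f-divergence D_f(P || Q) = sum_x q(x) * f(p(x) / q(x))
        for discrete distributions, assuming q(x) > 0 wherever p(x) > 0."""
        p = np.asarray(p, dtype=float)
        q = np.asarray(q, dtype=float)
        mask = q > 0  # terms with q(x) = 0 contribute 0 under the usual convention
        return float(np.sum(q[mask] * f(p[mask] / q[mask])))

    # f(t) = t * log(t) is convex with f(1) = 0, giving the KL divergence:
    # q * (p/q) * log(p/q) = p * log(p/q).
    kl_generator = lambda t: t * np.log(t)

    p = [0.5, 0.5]
    q = [0.9, 0.1]
    print(f_divergence(p, q, kl_generator))  # KL(P || Q) = 0.5*log(25/9) ≈ 0.511
    ```

    Any convex f with f(1) = 0 yields a valid divergence (e.g. f(t) = |t - 1|/2 gives total variation), which is the family the paper extends to k > 2 distributions.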

    Original language: English
    Pages (from-to): 28.1-28.20
    Journal: Journal of Machine Learning Research
    Volume: 23
    Publication status: Published - 2012
    Event: 25th Annual Conference on Learning Theory, COLT 2012 - Edinburgh, United Kingdom
    Duration: 25 Jun 2012 - 27 Jun 2012
