Abstract
Mixability of a loss governs the best possible performance when aggregating expert predictions with respect to that loss. The determination of the mixability constant for binary losses is straightforward but opaque. In the binary case we make this transparent and simple by characterising mixability in terms of the second derivative of the Bayes risk of proper losses. We then extend this result to multiclass proper losses, where few existing results are available. We show that mixability is governed by the Hessian of the Bayes risk, relative to the Hessian of the Bayes risk for log loss. We conclude by comparing our result to other work that bounds prediction performance in terms of the geometry of the Bayes risk. Although all calculations are for proper losses, we also show how to carry the results across to improper losses.
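The binary characterisation lends itself to a quick numerical check. The sketch below assumes the criterion suggested by the abstract: that the mixability constant of a proper binary loss is the infimum over p of the ratio of the second derivative of the log-loss Bayes risk to that of the loss in question. The helper names (`bayes_risk_log`, `second_derivative`) are illustrative, not from the paper. For squared loss, whose Bayes risk is p(1 − p), the sketch recovers the known constant η = 2.

```python
import numpy as np

# Sketch based on the abstract's binary characterisation (not code from the
# paper): a proper binary loss is eta-mixable iff
#     eta <= inf_p  L''_log(p) / L''_loss(p),
# where L denotes the (concave) Bayes risk of each loss.

def bayes_risk_log(p):
    # Bayes risk of log loss: binary entropy.
    return -p * np.log(p) - (1 - p) * np.log(1 - p)

def bayes_risk_square(p):
    # Bayes risk of squared loss with partial losses (1 - p)^2 and p^2.
    return p * (1 - p)

def second_derivative(f, p, h=1e-5):
    # Central finite difference approximation of f''(p).
    return (f(p + h) - 2 * f(p) + f(p - h)) / h**2

p = np.linspace(0.01, 0.99, 999)
ratio = second_derivative(bayes_risk_log, p) / second_derivative(bayes_risk_square, p)
print(ratio.min())  # ~2.0: squared loss on [0, 1] is 2-mixable
```

Analytically the same conclusion falls out directly: L''_log(p) = −1/(p(1 − p)) and L''_square(p) = −2, so the ratio 1/(2p(1 − p)) is minimised at p = 1/2, giving η = 2.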
| Original language | English |
| --- | --- |
| Pages (from-to) | 233-251 |
| Number of pages | 19 |
| Journal | Journal of Machine Learning Research |
| Volume | 19 |
| Publication status | Published - 2011 |
| Event | 24th International Conference on Learning Theory, COLT 2011 - Budapest, Hungary. Duration: 9 Jul 2011 → 11 Jul 2011 |