Abstract
We generalise the classical Pinsker inequality, which relates variational divergence to Kullback-Leibler divergence, in two ways: we consider arbitrary f-divergences in place of KL divergence, and we assume knowledge of a sequence of values of generalised variational divergences. We then develop a best possible inequality for this doubly generalised situation. Specialising our result to the classical case provides a new and tight explicit bound relating KL to variational divergence (solving a problem posed by Vajda some 40 years ago). The solution relies on exploiting a connection between divergences and the Bayes risk of a learning problem via an integral representation.
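The classical Pinsker inequality referred to above states that KL(P‖Q) ≥ V(P,Q)²/2, where V(P,Q) = Σ|p − q| is the variational divergence (taking values in [0, 2]) and KL is measured in nats. The following is a minimal numerical sketch of this bound on random discrete distributions; the function names and the choice of test distributions are illustrative, not from the paper.

```python
import numpy as np

def kl_divergence(p, q):
    """Kullback-Leibler divergence KL(p || q) in nats (assumes q > 0 wherever p > 0)."""
    return float(np.sum(p * np.log(p / q)))

def variational_divergence(p, q):
    """Variational divergence V(p, q) = sum_i |p_i - q_i|, taking values in [0, 2]."""
    return float(np.sum(np.abs(p - q)))

# Check the classical Pinsker bound KL >= V^2 / 2 on random distributions.
rng = np.random.default_rng(0)
for _ in range(1000):
    p = rng.random(5); p /= p.sum()
    q = rng.random(5); q /= q.sum()
    assert kl_divergence(p, q) >= variational_divergence(p, q) ** 2 / 2 - 1e-12
```

The paper's contribution is a tight version of this relationship: given a sequence of generalised variational divergence values, it derives the best possible lower bound on an arbitrary f-divergence, of which the inequality checked above is the loosest classical special case.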
| Original language | English |
| --- | --- |
| Publication status | Published - 2009 |
| Event | 22nd Conference on Learning Theory, COLT 2009 |
| Country/Territory | Canada |
| City | Montreal, QC |
| Period | 18 Jun 2009 → 21 Jun 2009 |