TY - GEN
T1 - Rademacher observations, private data, and boosting
AU - Nock, Richard
AU - Patrini, Giorgio
AU - Friedman, Arik
N1 - Publisher Copyright:
Copyright © 2015 by the author(s).
PY - 2015
Y1 - 2015
AB - The minimization of the logistic loss is a popular approach to batch supervised learning. Our paper starts from the surprising observation that, when fitting linear classifiers, the minimization of the logistic loss is equivalent to the minimization of an exponential rado-loss computed (i) over transformed data that we call Rademacher observations (rados), and (ii) over the same classifier as that of the logistic loss. Thus, a classifier learnt from rados can be directly used to classify observations. We provide a learning algorithm over rados with boosting-compliant convergence rates on the logistic loss (computed over examples). Experiments on domains with up to millions of examples, backed up by theoretical arguments, demonstrate that learning over a small set of random rados can challenge the state of the art that learns over the complete set of examples. We show that rados comply with various privacy requirements that make them good candidates for machine learning in a privacy framework. We give several algebraic, geometric and computational hardness results on reconstructing examples from rados. We also show how it is possible to craft, and efficiently learn from, rados in a differential privacy framework. Tests reveal that learning from differentially private rados brings non-trivial privacy versus accuracy trade-offs.
UR - http://www.scopus.com/inward/record.url?scp=84969766526&partnerID=8YFLogxK
M3 - Conference contribution
T3 - 32nd International Conference on Machine Learning, ICML 2015
SP - 948
EP - 956
BT - 32nd International Conference on Machine Learning, ICML 2015
A2 - Blei, David
A2 - Bach, Francis
PB - International Machine Learning Society (IMLS)
T2 - 32nd International Conference on Machine Learning, ICML 2015
Y2 - 6 July 2015 through 11 July 2015
ER -