TY - JOUR
T1 - No fuss metric learning, a Hilbert space scenario
AU - Faraki, Masoud
AU - Harandi, Mehrtash T.
AU - Porikli, Fatih
N1 - Publisher Copyright:
© 2017
PY - 2017/10/15
Y1 - 2017/10/15
N2 - In this paper, we devise a kernel version of the recently introduced keep it simple and straightforward metric learning method, thereby extending its applicability to scenarios where the input data is non-linearly distributed. To this end, we make use of infinite-dimensional covariance matrices and show how a matrix in a reproducing kernel Hilbert space can be projected onto the positive cone efficiently. In particular, we propose two techniques for projecting onto the positive cone in a reproducing kernel Hilbert space. The first method, though only approximating the solution, enjoys a closed-form, analytic formulation. The second solution is more accurate but requires Riemannian optimization techniques. Nevertheless, both solutions scale up very well, as our empirical evaluations suggest. For the sake of completeness, we also employ the Nyström method to approximate a reproducing kernel Hilbert space before learning a metric. Our experiments show that, compared to state-of-the-art metric learning algorithms, working directly in a reproducing kernel Hilbert space leads to more robust and better performance.
AB - In this paper, we devise a kernel version of the recently introduced keep it simple and straightforward metric learning method, thereby extending its applicability to scenarios where the input data is non-linearly distributed. To this end, we make use of infinite-dimensional covariance matrices and show how a matrix in a reproducing kernel Hilbert space can be projected onto the positive cone efficiently. In particular, we propose two techniques for projecting onto the positive cone in a reproducing kernel Hilbert space. The first method, though only approximating the solution, enjoys a closed-form, analytic formulation. The second solution is more accurate but requires Riemannian optimization techniques. Nevertheless, both solutions scale up very well, as our empirical evaluations suggest. For the sake of completeness, we also employ the Nyström method to approximate a reproducing kernel Hilbert space before learning a metric. Our experiments show that, compared to state-of-the-art metric learning algorithms, working directly in a reproducing kernel Hilbert space leads to more robust and better performance.
KW - Mahalanobis metric learning
KW - Reproducing kernel Hilbert space
KW - Symmetric positive definite matrix
UR - http://www.scopus.com/inward/record.url?scp=85029188153&partnerID=8YFLogxK
U2 - 10.1016/j.patrec.2017.09.017
DO - 10.1016/j.patrec.2017.09.017
M3 - Article
SN - 0167-8655
VL - 98
SP - 83
EP - 89
JO - Pattern Recognition Letters
JF - Pattern Recognition Letters
ER -