No fuss metric learning, a Hilbert space scenario

Masoud Faraki*, Mehrtash T. Harandi, Fatih Porikli

*Corresponding author for this work

    Research output: Contribution to journal › Article › peer-review

    13 Citations (Scopus)

    Abstract

    In this paper, we devise a kernel version of the recently introduced keep-it-simple-and-straightforward (KISS) metric learning method, thereby extending its applicability to scenarios where the input data is non-linearly distributed. To this end, we make use of infinite-dimensional covariance matrices and show how a matrix in a reproducing kernel Hilbert space (RKHS) can be projected onto the positive cone efficiently. In particular, we propose two techniques for projecting onto the positive cone in an RKHS. The first method, though only approximate, enjoys a closed-form, analytic formulation. The second solution is more accurate but requires Riemannian optimization techniques. Nevertheless, both solutions scale up well, as our empirical evaluations suggest. For the sake of completeness, we also employ the Nyström method to approximate the RKHS before learning a metric. Our experiments show that, compared to state-of-the-art metric learning algorithms, working directly in the reproducing kernel Hilbert space leads to more robust and better performance.
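
    As background for the projection step mentioned above: in the finite-dimensional case, projecting a symmetric matrix onto the positive semidefinite cone is typically done by eigendecomposition followed by clipping of negative eigenvalues, and the Nyström method approximates a kernel feature map from a subset of landmark points. The Python sketch below illustrates these two standard building blocks only; it is not the authors' RKHS formulation, and the function names, the RBF kernel choice, and all parameters are illustrative assumptions.

        import numpy as np

        def project_to_psd_cone(M):
            # Symmetrize, then clip negative eigenvalues: the standard
            # finite-dimensional projection onto the positive cone.
            M = (M + M.T) / 2.0
            w, V = np.linalg.eigh(M)
            return (V * np.clip(w, 0.0, None)) @ V.T

        def nystrom_features(X, landmarks, gamma=1.0):
            # Nystrom approximation of an RBF kernel feature map:
            # phi(x) = k(x, landmarks) @ W^{-1/2}, W = k(landmarks, landmarks).
            def rbf(A, B):
                d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
                return np.exp(-gamma * d2)
            W = rbf(landmarks, landmarks)
            w, V = np.linalg.eigh(W)
            W_inv_sqrt = (V / np.sqrt(np.clip(w, 1e-12, None))) @ V.T
            return rbf(X, landmarks) @ W_inv_sqrt

        # Usage: map data through the approximate feature space, form a
        # Mahalanobis-style matrix there, and keep it a valid metric by
        # projecting back onto the positive semidefinite cone.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(100, 5))
        Z = nystrom_features(X, landmarks=X[:10])
        M = project_to_psd_cone(rng.normal(size=(Z.shape[1], Z.shape[1])))
        assert np.all(np.linalg.eigvalsh(M) >= -1e-10)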

    Original language: English
    Pages (from-to): 83-89
    Number of pages: 7
    Journal: Pattern Recognition Letters
    Volume: 98
    Publication status: Published - 15 Oct 2017
