Riemannian Dictionary Learning and Sparse Coding for Positive Definite Matrices

Anoop Cherian*, Suvrit Sra

*Corresponding author for this work

    Research output: Contribution to journal › Article › peer-review

    76 Citations (Scopus)

    Abstract

    Data encoded as symmetric positive definite (SPD) matrices frequently arise in many areas of computer vision and machine learning. While these matrices form an open subset of the Euclidean space of symmetric matrices, viewing them through the lens of non-Euclidean Riemannian geometry often turns out to be better suited to capturing several desirable data properties. Inspired by the great success of dictionary learning and sparse coding (DLSC) for vector-valued data, our goal in this paper is to represent data in the form of SPD matrices as sparse conic combinations of SPD atoms from a learned dictionary via a Riemannian geometric approach. To that end, we formulate a novel Riemannian optimization objective for DLSC, in which the representation loss is characterized via the affine-invariant Riemannian metric. We also present a computationally simple algorithm for optimizing our model. Experiments on several computer vision data sets demonstrate superior classification and retrieval performance using our approach when compared with sparse coding via alternative non-Riemannian formulations.
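
    As a rough illustration of the quantities named in the abstract, the sketch below computes the affine-invariant Riemannian metric (AIRM) distance between SPD matrices and a sparse-coding loss for a conic combination of dictionary atoms. This is a plausible reading of the formulation under stated assumptions, not the authors' implementation: the function names, the l1 sparsity penalty, and the weight lam are illustrative choices of ours.

        import numpy as np

        def _spd_apply(X, fun):
            # Apply a scalar function to the eigenvalues of a symmetric matrix X.
            w, V = np.linalg.eigh(X)
            return (V * fun(w)) @ V.T

        def airm_distance(X, Y):
            # AIRM distance d(X, Y) = || log(X^{-1/2} Y X^{-1/2}) ||_F for SPD X, Y.
            X_inv_sqrt = _spd_apply(X, lambda w: 1.0 / np.sqrt(w))
            M = X_inv_sqrt @ Y @ X_inv_sqrt      # SPD whenever X and Y are SPD
            return np.linalg.norm(_spd_apply(M, np.log), 'fro')

        def coding_loss(X, atoms, alpha, lam=0.1):
            # Squared AIRM reconstruction error of X by a conic (non-negative)
            # combination of SPD atoms, plus an l1 sparsity penalty on the codes.
            # (lam and the l1 penalty are illustrative assumptions, not the paper's exact objective.)
            recon = sum(a * B for a, B in zip(alpha, atoms))  # conic combination of SPD atoms stays SPD
            return airm_distance(X, recon) ** 2 + lam * np.sum(np.abs(alpha))

        # Toy usage: a random SPD data point and two random SPD atoms.
        rng = np.random.default_rng(0)
        def rand_spd(n):
            A = rng.standard_normal((n, n))
            return A @ A.T + n * np.eye(n)

        X = rand_spd(3)
        atoms = [rand_spd(3), rand_spd(3)]
        print(coding_loss(X, atoms, alpha=np.array([0.5, 0.2])))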

    Original language: English
    Article number: 7565529
    Pages (from-to): 2859-2871
    Number of pages: 13
    Journal: IEEE Transactions on Neural Networks and Learning Systems
    Volume: 28
    Issue number: 12
    DOIs
    Publication status: Published - Dec 2017
