Deep subspace clustering networks

Pan Ji, Tong Zhang, Hongdong Li, Mathieu Salzmann, Ian Reid

    Research output: Contribution to journal › Conference article › peer-review

    498 Citations (Scopus)

    Abstract

    We present a novel deep neural network architecture for unsupervised subspace clustering. This architecture is built upon deep auto-encoders, which non-linearly map the input data into a latent space. Our key idea is to introduce a novel self-expressive layer between the encoder and the decoder to mimic the "self-expressiveness" property that has proven effective in traditional subspace clustering. Being differentiable, our new self-expressive layer provides a simple but effective way to learn pairwise affinities between all data points through a standard back-propagation procedure. Being nonlinear, our neural-network based method is able to cluster data points having complex (often nonlinear) structures. We further propose pre-training and fine-tuning strategies that let us effectively learn the parameters of our subspace clustering networks. Our experiments show that our method significantly outperforms the state-of-the-art unsupervised subspace clustering techniques.
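    The core mechanism described in the abstract — expressing each latent code as a combination of the other samples' codes, with the combination weights learned by back-propagation — can be illustrated in isolation. The sketch below is a minimal NumPy version of the self-expressiveness objective only: it fits coefficients C minimizing ||C||_F^2 + (λ/2)||Z − CZ||_F^2 with diag(C) = 0 by plain gradient descent on fixed latent codes Z. In the paper, C is instead implemented as the weights of a fully connected (self-expressive) layer trained jointly with the auto-encoder; the function name, hyperparameters, and optimizer here are illustrative assumptions.

    ```python
    import numpy as np

    def self_expressive_coefficients(Z, lam=1.0, lr=0.01, steps=2000):
        """Fit C minimizing ||C||_F^2 + (lam/2)*||Z - C @ Z||_F^2, diag(C) = 0.

        Z : (N, d) array of latent codes (encoder outputs in the paper).
        Returns C : (N, N) self-expression coefficients.
        """
        N = Z.shape[0]
        C = np.zeros((N, N))
        for _ in range(steps):
            residual = Z - C @ Z                     # self-expression residual
            grad = 2.0 * C - lam * residual @ Z.T    # gradient of the objective
            C -= lr * grad
            np.fill_diagonal(C, 0.0)                 # rule out the trivial C = I
        return C
    ```

    As is standard in self-expressiveness-based subspace clustering, the learned coefficients are then symmetrized into an affinity matrix, A = |C| + |C|ᵀ, which is passed to spectral clustering to produce the final segmentation. Points drawn from the same subspace reconstruct each other well, so within-subspace affinities dominate.
    
    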

    Original language: English
    Pages (from-to): 24-33
    Number of pages: 10
    Journal: Advances in Neural Information Processing Systems
    Volume: 2017-December
    Publication status: Published - 2017
    Event: 31st Annual Conference on Neural Information Processing Systems, NIPS 2017 - Long Beach, United States
    Duration: 4 Dec 2017 - 9 Dec 2017
