Semantic labeling for prosthetic vision

Lachlan Horne*, Jose Alvarez, Chris McCarthy, Mathieu Salzmann, Nick Barnes

*Corresponding author for this work

    Research output: Contribution to journal › Article › peer-review

    26 Citations (Scopus)

    Abstract

    Current and near-term implantable prosthetic vision systems offer the potential to restore some visual function, but suffer from limited resolution and dynamic range of induced visual percepts. This can make navigating complex environments difficult for users. We introduce semantic labeling as a technique to improve navigation outcomes for prosthetic vision users. We produce a novel egocentric vision dataset to demonstrate how semantic labeling can be applied to this problem. We also improve the speed of semantic labeling with sparse computation of unary potentials, enabling its use in real-time wearable assistive devices. We use simulated prosthetic vision to demonstrate the results of our technique. Our approach allows a prosthetic vision system to selectively highlight specific classes of objects in the user's field of view, improving the user's situational awareness.
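    The abstract describes selectively highlighting object classes in a low-resolution phosphene display, but no implementation details are given here. Below is a minimal illustrative sketch of that rendering idea, assuming a per-pixel semantic label map is already available; the phosphene grid size, Gaussian phosphene profile, and brightness weighting are assumptions for illustration, not the parameters or method used in the paper.

    ```python
    import numpy as np

    def simulate_prosthetic_view(label_map, target_class, grid_shape=(20, 32),
                                 phosphene_radius=6, highlight_gain=1.0, base_gain=0.3):
        """Render a low-resolution phosphene image from a per-pixel semantic
        label map, boosting brightness where the target class dominates.

        label_map    : (H, W) int array of class indices (hypothetical input).
        target_class : index of the class to highlight (e.g. ground or obstacle).
        grid_shape   : phosphene grid resolution; an assumed value, not the
                       implant resolution considered in the paper.
        """
        h, w = label_map.shape
        gh, gw = grid_shape
        out = np.zeros((gh * 2 * phosphene_radius, gw * 2 * phosphene_radius),
                       dtype=np.float32)

        # Pre-compute a circular Gaussian phosphene profile.
        yy, xx = np.mgrid[-phosphene_radius:phosphene_radius,
                          -phosphene_radius:phosphene_radius]
        profile = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * (phosphene_radius / 2.5) ** 2))

        for gy in range(gh):
            for gx in range(gw):
                # Image patch summarised by this phosphene.
                y0, y1 = gy * h // gh, (gy + 1) * h // gh
                x0, x1 = gx * w // gw, (gx + 1) * w // gw
                patch = label_map[y0:y1, x0:x1]

                # Brightness = base level plus a boost proportional to the
                # fraction of the patch belonging to the highlighted class.
                frac = np.mean(patch == target_class)
                brightness = base_gain + highlight_gain * frac

                cy = gy * 2 * phosphene_radius
                cx = gx * 2 * phosphene_radius
                out[cy:cy + 2 * phosphene_radius,
                    cx:cx + 2 * phosphene_radius] += brightness * profile

        return np.clip(out, 0.0, 1.0)


    if __name__ == "__main__":
        # Toy label map: class 0 = background, class 1 = a highlighted class.
        labels = np.zeros((240, 320), dtype=np.int32)
        labels[150:, :] = 1
        view = simulate_prosthetic_view(labels, target_class=1)
        print(view.shape, view.max())
    ```

    In this sketch each phosphene summarises one cell of the input label map, so highlighted regions remain visible even at the very low resolutions typical of induced visual percepts.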

    Original language: English
    Pages (from-to): 113-125
    Number of pages: 13
    Journal: Computer Vision and Image Understanding
    Volume: 149
    DOIs
    Publication status: Published - 1 Aug 2016
