Depth estimation and blur removal from a single out-of-focus image

Saeed Anwar, Zeeshan Hayder, Fatih Porikli

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

    29 Citations (Scopus)

    Abstract

    This paper presents a depth estimation method that leverages rich representations learned from cascaded convolutional and fully connected neural networks operating on a patch-pooled set of feature maps. Our method is fast, substantially improves depth accuracy over state-of-the-art alternatives, and, from the estimated depth, computationally reconstructs an all-focus image and achieves synthetic refocusing, all from a single image. Our experiments on the Make3D and NYU-v2 benchmark datasets demonstrate superior performance over other available depth estimation methods, reducing root-mean-squared error by 57% and 46%, and over blur removal methods, improving PSNR by 0.36 dB and 0.72 dB, respectively. This improvement is also demonstrated on real defocused images.
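
    The abstract describes a pipeline in which cascaded convolutional layers extract feature maps, a patch-pooling step aggregates them into a fixed-length descriptor, and fully connected layers regress depth. The sketch below is a minimal, hypothetical PyTorch illustration of that idea, not the authors' implementation; the layer widths, the 8x8 patch-pooling grid, and the coarse 16x16 depth output are assumptions, and the final lines merely show the RMSE metric the abstract reports.

```python
# Hypothetical sketch (not the authors' released code) of a cascaded
# conv + patch-pooled + fully connected depth regressor, as suggested
# by the abstract. All layer sizes are illustrative assumptions.
import torch
import torch.nn as nn


class PatchPooledDepthNet(nn.Module):
    def __init__(self, out_h=16, out_w=16):
        super().__init__()
        # Cascaded convolutional feature extractor.
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Patch pooling: aggregate the feature maps over a fixed grid of
        # patches, giving a fixed-length descriptor for any input size.
        self.patch_pool = nn.AdaptiveMaxPool2d((8, 8))
        # Fully connected layers regress a coarse depth map.
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, 1024), nn.ReLU(inplace=True),
            nn.Linear(1024, out_h * out_w),
        )
        self.out_h, self.out_w = out_h, out_w

    def forward(self, x):
        f = self.features(x)
        p = self.patch_pool(f)
        d = self.regressor(p)
        return d.view(-1, 1, self.out_h, self.out_w)


if __name__ == "__main__":
    net = PatchPooledDepthNet()
    rgb = torch.randn(2, 3, 240, 320)   # batch of single out-of-focus images
    depth = net(rgb)                    # coarse depth maps, shape (2, 1, 16, 16)
    target = torch.rand_like(depth)     # placeholder ground-truth depth
    # Root-mean-squared error, the depth metric reported in the abstract.
    rmse = torch.sqrt(nn.functional.mse_loss(depth, target))
    print(depth.shape, rmse.item())
```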

    Original language: English
    Title of host publication: British Machine Vision Conference 2017, BMVC 2017
    Publisher: BMVA Press
    ISBN (Electronic): 190172560X, 9781901725605
    DOIs
    Publication status: Published - 2017
    Event: 28th British Machine Vision Conference, BMVC 2017 - London, United Kingdom
    Duration: 4 Sept 2017 → 7 Sept 2017

    Publication series

    Name: British Machine Vision Conference 2017, BMVC 2017

    Conference

    Conference: 28th British Machine Vision Conference, BMVC 2017
    Country/Territory: United Kingdom
    City: London
    Period: 4/09/17 → 7/09/17
