Underwater scene prior inspired deep underwater image and video enhancement

Chongyi Li*, Saeed Anwar, Fatih Porikli

*Corresponding author for this work

    Research output: Contribution to journal › Article › peer-review

    720 Citations (Scopus)

    Abstract

    In underwater scenes, wavelength-dependent light absorption and scattering degrade the visibility of images and videos, which in turn reduces the accuracy of pattern recognition, visual understanding, and key feature extraction. In this paper, we propose an underwater image enhancement convolutional neural network (CNN) model based on an underwater scene prior, called UWCNN. Instead of estimating the parameters of the underwater imaging model, the proposed UWCNN model directly reconstructs the clear latent underwater image, benefiting from an underwater scene prior that can be used to synthesize underwater image training data. Moreover, thanks to its light-weight network structure and effective training data, our UWCNN model can be easily extended to underwater videos for frame-by-frame enhancement. Specifically, combining an underwater imaging physical model with the optical properties of underwater scenes, we first synthesize underwater image degradation datasets that cover a diverse set of water types and degradation levels. Then, a light-weight CNN model is designed for each underwater scene type and trained on the corresponding data. Finally, the UWCNN model is directly extended to underwater video enhancement. Experiments on real-world and synthetic underwater images and videos demonstrate that our method generalizes well to different underwater scenes.
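    The synthesis step builds on a simplified underwater image formation model, I_c(x) = J_c(x) t_c(x) + B_c (1 - t_c(x)), in which the transmission t_c decays with scene depth at a wavelength-dependent rate. The sketch below is a minimal NumPy illustration of that idea; the function name, attenuation coefficients, and background-light values are hypothetical placeholders, not values taken from the paper.

import numpy as np

def synthesize_underwater(clean_rgb, depth, beta_rgb, background_rgb):
    """Degrade a clean image with the simplified formation model
    I_c = J_c * t_c + B_c * (1 - t_c), where t_c = exp(-beta_c * d).

    clean_rgb:      H x W x 3 float array in [0, 1] (latent clean image J)
    depth:          H x W float array of scene depth in metres
    beta_rgb:       per-channel attenuation coefficients (water-type dependent)
    background_rgb: per-channel homogeneous background light B
    """
    t = np.exp(-depth[..., None] * np.asarray(beta_rgb))  # transmission map
    B = np.asarray(background_rgb)
    return clean_rgb * t + B * (1.0 - t)

# Hypothetical open-ocean coefficients: red attenuates fastest, blue slowest.
degraded = synthesize_underwater(
    clean_rgb=np.random.rand(480, 640, 3),
    depth=np.full((480, 640), 5.0),   # constant 5 m depth, for demonstration
    beta_rgb=(0.35, 0.10, 0.05),      # assumed (R, G, B) attenuation values
    background_rgb=(0.05, 0.25, 0.40),
)

    Varying beta_rgb and background_rgb across such simulations yields one training set per water type and degradation level, mirroring the per-type datasets described in the abstract.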
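    Because one model is trained per water type and the network is light-weight, the same forward pass can be applied to each video frame independently. Below is a minimal PyTorch sketch of such a residual enhancement CNN; the LightEnhanceNet name, layer count, and channel width are illustrative assumptions and do not reproduce the paper's exact UWCNN architecture.

import torch
import torch.nn as nn

class LightEnhanceNet(nn.Module):
    """Small residual CNN: predicts a per-pixel correction that is added
    back to the input, keeping the model cheap enough for video frames."""
    def __init__(self, channels=16, num_layers=4):
        super().__init__()
        layers = [nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(num_layers - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1),
                       nn.ReLU(inplace=True)]
        layers.append(nn.Conv2d(channels, 3, 3, padding=1))
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        # Residual learning: output = input + learned correction, clipped to [0, 1].
        return torch.clamp(x + self.body(x), 0.0, 1.0)

model = LightEnhanceNet()                 # one instance per water type
frame = torch.rand(1, 3, 480, 640)        # a single video frame in [0, 1]
enhanced = model(frame)                   # applied frame by frame to video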

    Original language: English
    Article number: 107038
    Journal: Pattern Recognition
    Volume: 98
    DOIs
    Publication status: Published - 1 Feb 2020
