Attention-Based Real Image Restoration

Saeed Anwar, Nick Barnes, Lars Petersson

    Research output: Contribution to journal › Article › peer-review

    19 Citations (Scopus)

    Abstract

    Deep convolutional neural networks perform better on images containing spatially invariant degradations, also known as synthetic degradations; however, their performance is limited on real-world degraded photographs and typically requires multi-stage network modeling. To advance the practicability of restoration algorithms, this article proposes a novel single-stage blind real image restoration network (R2Net) built on a modular architecture. We use a residual-on-the-residual structure to ease low-frequency information flow and apply feature attention to exploit channel dependencies. Furthermore, the evaluation in terms of quantitative metrics and visual quality on four restoration tasks, i.e., denoising, super-resolution, raindrop removal, and JPEG compression artifact removal, across 11 real degraded datasets against more than 30 state-of-the-art algorithms demonstrates the superiority of our R2Net. We also present a comparison on three synthetically degraded denoising datasets to showcase our method's capability on synthetic denoising. The code, trained models, and results are available at https://github.com/saeedanwar/R2Net.
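    The abstract's two key ingredients, feature (channel) attention and a residual skip that eases low-frequency information flow, can be sketched in plain Python. This is an illustrative squeeze-and-excitation-style gate, not the authors' R2Net implementation; the function name, weight shapes, and the simplified skip (gated channel plus identity) are assumptions for illustration only.

    ```python
    import math

    def channel_attention(feats, w1, w2):
        """Illustrative feature-attention block (not the R2Net code).

        feats: list of C channels, each an H x W list of lists.
        w1:    weights mapping C pooled values to a reduced hidden vector.
        w2:    weights mapping the hidden vector back to C gate logits.
        Returns each channel scaled by a sigmoid gate, plus the identity
        skip -- a simplified stand-in for the residual-on-the-residual idea.
        """
        # 1. Squeeze: global average pooling, one scalar per channel.
        pooled = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
                  for ch in feats]
        # 2. Excitation: tiny two-layer MLP, ReLU then sigmoid gates in (0, 1).
        hidden = [max(0.0, sum(w * p for w, p in zip(row, pooled))) for row in w1]
        gates = [1.0 / (1.0 + math.exp(-sum(w * h for w, h in zip(row, hidden))))
                 for row in w2]
        # 3. Rescale each channel by its gate and add the residual skip,
        #    letting low-frequency content bypass the attention path.
        return [[[g * v + v for v in row] for row in ch]
                for g, ch in zip(gates, feats)]
    ```

    With zero weights in the second layer every gate is sigmoid(0) = 0.5, so each output value is 1.5 times its input, which makes the residual path easy to verify by hand.
    
    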

    Original language: English
    Pages (from-to): 3954-3964
    Number of pages: 11
    Journal: IEEE Transactions on Neural Networks and Learning Systems
    Volume: 36
    Issue number: 3
    DOIs
    Publication status: Published - 2021
