Self-Supervised Multiscale Adversarial Regression Network for Stereo Disparity Estimation

Chen Wang, Xiao Bai*, Xiang Wang, Xianglong Liu, Jun Zhou, Xinyu Wu, Hongdong Li, Dacheng Tao

*Corresponding author for this work

    Research output: Contribution to journal › Article › peer-review

    30 Citations (Scopus)

    Abstract

    Deep learning approaches have contributed significantly to recent progress in stereo matching. These deep stereo matching methods are usually based on supervised training, which requires a large amount of high-quality ground-truth depth map annotations that are expensive to collect. Furthermore, only a limited quantity of stereo vision training data is currently available, obtained either from active sensors (LiDAR and ToF cameras) or from computer graphics simulations, and it does not meet the requirements of deep supervised training. Here, we propose a novel deep stereo approach called the 'self-supervised multiscale adversarial regression network (SMAR-Net),' which relaxes the need for ground-truth depth maps during training. Specifically, we design a two-stage network. The first stage is a disparity regressor, in which a regression network estimates disparity values from stacked stereo image pairs. This stereo image stacking is a novel contribution: the stacked input not only contains the spatial appearance of the stereo images but also implies matching correspondences under different disparity values. In the second stage, a synthetic left image is generated based on the left-right consistency assumption. Our network is trained by minimizing a hybrid loss function composed of a content loss and an adversarial loss. The content loss minimizes the average warping error between the synthetic images and the real ones. In contrast to a standard generative adversarial loss, our proposed adversarial loss penalizes mismatches using multiscale features; this constrains the synthetic image to be pixelwise identical to the real image rather than merely belonging to the same distribution. Furthermore, the combined use of multiscale feature extraction in both the content loss and the adversarial loss further improves the adaptability of SMAR-Net in ill-posed regions. Experiments on multiple benchmark datasets show that SMAR-Net outperforms current state-of-the-art self-supervised methods and achieves results comparable to supervised methods. The source code can be accessed at: https://github.com/Dawnstar8411/SMAR-Net.
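
The abstract outlines three concrete ingredients: a stacked stereo input that encodes candidate disparities, a synthetic left view obtained by warping the right image under the left-right consistency assumption, and a hybrid content-plus-adversarial loss. The sketch below illustrates these ideas in PyTorch. It is a minimal illustration, not the SMAR-Net implementation: the function names (`stack_candidate_shifts`, `warp_right_to_left`, `hybrid_loss`), the L1/feature-matching loss forms, the `max_disp` and `lam` parameters, and the shape conventions are assumptions made for this example.

```python
# Minimal sketch of the ideas described in the abstract (assumed PyTorch,
# assumed disparity convention left(x, y) ~ right(x - d, y)); not the
# official SMAR-Net code.
import torch
import torch.nn.functional as F


def stack_candidate_shifts(left, right, max_disp, step=1):
    """Stack the left image with right-image copies shifted by candidate
    disparities, so the regressor's input carries both appearance and
    implicit matching correspondences."""
    planes = [left]
    for d in range(0, max_disp + 1, step):
        # Shift the right image d pixels to the right (zero padding on the
        # left edge), aligning right-image column x - d with left column x.
        planes.append(F.pad(right, (d, 0))[..., : right.shape[-1]])
    return torch.cat(planes, dim=1)  # (N, C * (num_shifts + 1), H, W)


def warp_right_to_left(right, disparity):
    """Synthesize a left view by bilinearly sampling the right image at
    x - d(x, y), i.e. the left-right consistency assumption.

    right:     (N, C, H, W) input right image
    disparity: (N, 1, H, W) predicted left-view disparity in pixels
    """
    n, _, h, w = right.shape
    xs = torch.arange(w, dtype=right.dtype, device=right.device)
    ys = torch.arange(h, dtype=right.dtype, device=right.device)
    xs = xs.view(1, 1, w).expand(n, h, w) - disparity.squeeze(1)
    ys = ys.view(1, h, 1).expand(n, h, w)
    # grid_sample expects (x, y) coordinates normalized to [-1, 1].
    grid = torch.stack((2.0 * xs / (w - 1) - 1.0,
                        2.0 * ys / (h - 1) - 1.0), dim=-1)
    return F.grid_sample(right, grid, mode="bilinear",
                         padding_mode="border", align_corners=True)


def hybrid_loss(left, right, disparity, feature_net, lam=0.01):
    """Content loss (average warping error) plus an adversarial-style term
    that matches features of the synthetic and real left images.
    `feature_net` stands in for the multiscale discriminator / feature
    extractor; `lam` is an assumed weighting, not a published value."""
    synthetic_left = warp_right_to_left(right, disparity)
    content = F.l1_loss(synthetic_left, left)
    adversarial = F.mse_loss(feature_net(synthetic_left), feature_net(left))
    return content + lam * adversarial
```

In a full training loop, the disparity regressor would consume the stacked tensor from `stack_candidate_shifts`, and the feature network would be trained adversarially and applied at several scales; both of those components are only hinted at in this sketch.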

    Original language: English
    Pages (from-to): 4770-4783
    Number of pages: 14
    Journal: IEEE Transactions on Cybernetics
    Volume: 51
    Issue number: 10
    DOIs
    Publication status: Published - 1 Oct 2021
