Moving object detection and segmentation in urban environments from a moving platform

Dingfu Zhou*, Vincent Frémont, Benjamin Quost, Yuchao Dai, Hongdong Li

*Corresponding author for this work

    Research output: Contribution to journal › Article › peer-review

    42 Citations (Scopus)


    This paper proposes an effective approach to detect and segment moving objects from two time-consecutive stereo frames, which leverages the uncertainties in camera motion estimation and in disparity computation. First, the relative camera motion and its uncertainty are computed by tracking and matching sparse features across the four images. Then, the motion likelihood at each pixel is estimated by taking into account the ego-motion uncertainty and the uncertainty in the disparity computation procedure. Finally, the motion likelihood, color, and depth cues are combined within a graph-cut framework for moving object segmentation. The efficiency of the proposed method is evaluated on the KITTI benchmark datasets, and our experiments show that the proposed approach is robust against both global (camera motion) and local (optical flow) noise. Moreover, the approach is dense, as it applies to all pixels in an image, and even partially occluded moving objects can be detected successfully. Without a dedicated tracking strategy, our approach achieves high recall and comparable precision on the KITTI benchmark sequences.
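    The per-pixel motion likelihood described in the abstract can be illustrated with a minimal sketch: compare the observed optical flow at a pixel against the flow predicted from ego-motion, and score the residual against a covariance that propagates the ego-motion and disparity uncertainties. This is an assumption-laden illustration, not the authors' implementation; the Gaussian residual model and the covariance values are hypothetical.

```python
import numpy as np

def motion_likelihood(residual, cov):
    """Likelihood that a pixel belongs to a moving object.

    residual: (2,) observed flow minus the flow predicted from ego-motion.
    cov: (2, 2) covariance of the residual, propagated (hypothetically)
         from ego-motion and disparity uncertainties.

    Uses the squared Mahalanobis distance of the residual; for 2 degrees
    of freedom the chi-square CDF has the closed form 1 - exp(-d2 / 2),
    so a large, uncertainty-normalized residual maps to a value near 1.
    """
    d2 = float(residual @ np.linalg.inv(cov) @ residual)
    return 1.0 - np.exp(-d2 / 2.0)

# Example with assumed values: isotropic 0.5 px residual uncertainty.
cov = 0.25 * np.eye(2)
static_pixel = motion_likelihood(np.array([0.1, 0.0]), cov)   # small residual
moving_pixel = motion_likelihood(np.array([3.0, 2.0]), cov)   # large residual
```

    In the paper this per-pixel score is only one cue; it is fused with color and depth terms inside a graph-cut energy rather than thresholded directly.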

    Original language: English
    Pages (from-to): 76-87
    Number of pages: 12
    Journal: Image and Vision Computing
    Publication status: Published - Dec 2017

