Towards unsupervised semantic segmentation of street scenes from motion cues

Hajar Sadeghi Sokeh*, Stephen Gould

*Corresponding author for this work

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

    1 Citation (Scopus)

    Abstract

    Motion provides a rich source of information about the world. It can be used as an important cue to analyse the behaviour of objects in a scene and consequently to identify interesting locations within it. In this paper, given an unannotated video sequence of a dynamic scene captured from a fixed viewpoint, we first present a set of useful motion features that can be efficiently extracted at each pixel from optical flow. Using these features, we then develop an algorithm that extracts motion topic models and identifies semantically significant regions and landmarks in a complex scene from a short video sequence. For example, by watching a street scene our algorithm can extract meaningful regions such as roads and important landmarks such as parking spots. Our method is robust to complicating factors such as shadows and occlusions.
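    To make the idea of per-pixel motion features concrete, here is a minimal sketch in NumPy. The specific features below (speed, motion angle, a binary motion mask, and a quantized direction "word" per pixel) are illustrative assumptions, not the authors' exact feature set; in practice the dense flow field would come from an optical-flow algorithm (e.g. Farneback flow in OpenCV) rather than being constructed by hand.

    ```python
    import numpy as np

    # Toy dense optical-flow field for a 4x4 frame: the left half moves
    # right by 2 px, the right half is static. Shape (H, W, 2) = (dy, dx).
    H, W = 4, 4
    flow = np.zeros((H, W, 2), dtype=np.float32)
    flow[:, :2, 1] = 2.0  # dx = 2 for the left two columns

    # Hypothetical per-pixel motion features of the kind the abstract alludes to:
    magnitude = np.linalg.norm(flow, axis=2)            # speed at each pixel
    direction = np.arctan2(flow[..., 0], flow[..., 1])  # motion angle in radians
    moving = magnitude > 0.5                            # binary motion mask

    # Quantize direction into 8 bins where motion occurs, yielding a discrete
    # "motion word" per pixel -- a natural input format for a topic model.
    bins = np.floor((direction + np.pi) / (2 * np.pi / 8)).astype(int) % 8
    words = np.where(moving, bins, -1)  # -1 marks static pixels
    ```

    Accumulating such per-pixel motion words over a short video clip would give the word-count histograms a topic model needs to discover coherent motion regions.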

    Original language: English
    Title of host publication: Proceedings of IVCNZ 2012 - The 27th Image and Vision Computing New Zealand Conference
    Pages: 232-237
    Number of pages: 6
    DOIs
    Publication status: Published - 2012
    Event: 27th Image and Vision Computing New Zealand Conference, IVCNZ 2012 - Dunedin, New Zealand
    Duration: 26 Nov 2012 - 28 Nov 2012

    Publication series

    Name: ACM International Conference Proceeding Series

    Conference

    Conference: 27th Image and Vision Computing New Zealand Conference, IVCNZ 2012
    Country/Territory: New Zealand
    City: Dunedin
    Period: 26/11/12 - 28/11/12
