Robust and efficient relative pose with a Multi-Camera system for autonomous driving in highly dynamic environments

Liu Liu*, Hongdong Li, Yuchao Dai, Quan Pan

*Corresponding author for this work

    Research output: Contribution to journal › Article › peer-review

    44 Citations (Scopus)


    This paper studies the relative pose problem for autonomous vehicles driving in highly dynamic and possibly cluttered environments. This is a challenging scenario due to the existence of multiple, large, and independently moving objects in the environment, which often leads to an excessive proportion of outliers and results in erroneous motion estimation. Existing algorithms cannot cope well with such situations. This paper proposes a new algorithm for relative pose estimation using a multi-camera system with multiple non-overlapping cameras. The method works robustly even when the number of outliers is overwhelming. By exploiting specific prior knowledge of the autonomous driving scene, we develop an efficient 4-point algorithm for multi-camera relative pose estimation, which admits analytic solutions via polynomial root finding and runs extremely fast (about 0.5 μs per root). When the solver is combined with a new random sample consensus (RANSAC) sampling scheme that exploits the conjugate motion constraint, we can quickly prune unpromising hypotheses and significantly improve the chance of finding inliers. Experiments on synthetic data validate the performance of the proposed algorithm, and tests on real data further confirm its practical relevance.
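    The abstract's key idea — pruning pose hypotheses with a cheap prior-based check before the expensive inlier count — can be illustrated with a toy RANSAC loop. The sketch below is purely illustrative and is not the paper's 4-point solver: it fits a 2D line from minimal 2-point samples, and a hypothetical slope prior plays the role of the conjugate motion constraint, rejecting hypotheses cheaply before inlier scoring. All names and parameters here are assumptions for the sketch.

    ```python
    import random

    def fit_line(p1, p2):
        # Minimal 2-point hypothesis: returns (slope, intercept), or None if vertical.
        (x1, y1), (x2, y2) = p1, p2
        if abs(x2 - x1) < 1e-12:
            return None
        m = (y2 - y1) / (x2 - x1)
        return m, y1 - m * x1

    def ransac_with_pruning(points, prior_slope, slope_tol, inlier_tol,
                            iters=200, seed=0):
        """Toy RANSAC: a cheap prior check prunes hypotheses before the
        expensive inlier count (loosely analogous to discarding pose
        hypotheses that violate a known motion prior)."""
        rng = random.Random(seed)
        best_model, best_inliers = None, -1
        for _ in range(iters):
            sample = rng.sample(points, 2)
            model = fit_line(*sample)
            if model is None:
                continue
            m, b = model
            # Cheap pruning step: skip hypotheses far from the motion prior,
            # avoiding the costly per-point inlier count below.
            if abs(m - prior_slope) > slope_tol:
                continue
            inliers = sum(1 for (x, y) in points
                          if abs(y - (m * x + b)) < inlier_tol)
            if inliers > best_inliers:
                best_model, best_inliers = model, inliers
        return best_model, best_inliers
    ```

    With many outliers, most sampled hypotheses fail the cheap check and never reach the inlier count, which is the efficiency argument the abstract makes for the conjugate-motion-constrained sampling scheme.
    
    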

    Original language: English
    Article number: 8053815
    Pages (from-to): 2432-2444
    Number of pages: 13
    Journal: IEEE Transactions on Intelligent Transportation Systems
    Issue number: 8
    Publication status: Published - Aug 2018
