Visual-inertial motion priors for robust monocular SLAM

Usman Qayyum*, Jonghyuk Kim

*Corresponding author for this work

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

    Abstract

    Monocular visual SLAM approaches are largely constrained in their performance by the use of a general motion model and the lack of true scale information. We propose an approach that improves the motion prediction step of visual SLAM and results in better estimation of map scale. The approach exploits the short-term accuracy of inertial velocity, combined with visual orientation, to estimate refined motion priors. These motion priors are fused with a sparse set of 3D map features to constrain the positional drift of the moving platform. Experimental results are presented for a large-scale outdoor environment, demonstrating robust performance and improved observability of map scale in monocular SLAM.
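    As a rough illustration of the idea only (not the paper's implementation), the prediction step can be sketched as rotating a short-term inertial velocity estimate by the visually estimated heading, rather than assuming a generic constant-velocity model. The 2D simplification and all names below are illustrative assumptions.

    ```python
    import math

    def motion_prior(p_prev, yaw_vis, v_body, dt):
        """Predict the next 2D position from a visual-inertial motion prior.

        p_prev  : (x, y) previous world-frame position
        yaw_vis : heading from the visual orientation estimate (radians)
        v_body  : (vx, vy) short-term inertial velocity in the body frame
        dt      : time step in seconds

        Illustrative sketch: rotates the body-frame velocity into the
        world frame using the visual yaw, then integrates one step.
        """
        vx = v_body[0] * math.cos(yaw_vis) - v_body[1] * math.sin(yaw_vis)
        vy = v_body[0] * math.sin(yaw_vis) + v_body[1] * math.cos(yaw_vis)
        return (p_prev[0] + vx * dt, p_prev[1] + vy * dt)

    # Example: platform moving forward at 1 m/s, heading 90 degrees.
    p = motion_prior((0.0, 0.0), math.pi / 2, (1.0, 0.0), 0.1)
    ```

    In a full filter, this prediction would serve as the prior that the sparse 3D map features then correct, constraining positional drift.
    
    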

    Original language: English
    Title of host publication: Towards Autonomous Robotic Systems - 12th Annual Conference, TAROS 2011, Proceedings
    Pages: 430-431
    Number of pages: 2
    DOIs
    Publication status: Published - 2011
    Event: 12th Annual Conference on Towards Autonomous Robotic Systems, TAROS 2011 - Sheffield, United Kingdom
    Duration: 31 Aug 2011 - 2 Sept 2011

    Publication series

    Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
    Volume: 6856 LNAI
    ISSN (Print): 0302-9743
    ISSN (Electronic): 1611-3349

    Conference

    Conference: 12th Annual Conference on Towards Autonomous Robotic Systems, TAROS 2011
    Country/Territory: United Kingdom
    City: Sheffield
    Period: 31/08/11 - 2/09/11

