TY - GEN
T1 - Seamless aiding of inertial-SLAM using Visual Directional Constraints from a monocular vision
AU - Qayyum, Usman
AU - Kim, Jonghyuk
PY - 2012
Y1 - 2012
AB - Inertial-SLAM has been actively studied as it can provide all-terrain navigational capability with full six-degrees-of-freedom information to autonomous robots. With the recent availability of low-cost inertial and vision sensors, a lightweight and accurate mapping system can be achieved for many robotic tasks such as land and aerial exploration. The key challenge is the availability of reliable and constant aiding information to correct the inertial system, which is intrinsically unstable. Existing approaches have relied on feature-based maps, which require an accurate depth-resolution process to correct the inertial unit properly, and their aiding rate is highly dependent on the map density. In this work we propose to integrate visual odometry directly into the inertial system by fusing the scale-ambiguous translation vectors as Visual Directional Constraints (VDC) on vehicle motion at high update rates, while the 3D map is still used to constrain the longitudinal drift, but in a relaxed way. In this way, visual odometry information can be seamlessly fused into the inertial system by resolving the scale ambiguity between the inertial and monocular camera, thus achieving reliable and constant aiding. The proposed approach is evaluated on a SLAM benchmark dataset and in a simulated environment, showing more stable and consistent performance of monocular inertial-SLAM.
UR - http://www.scopus.com/inward/record.url?scp=84872314878&partnerID=8YFLogxK
DO - 10.1109/IROS.2012.6385830
M3 - Conference contribution
SN - 978-1-4673-1737-5
T3 - IEEE International Conference on Intelligent Robots and Systems
SP - 4205
EP - 4210
BT - 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2012
T2 - 25th IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2012
Y2 - 7 October 2012 through 12 October 2012
ER -