Real-time Depth Enhanced Monocular Odometry

Download: PDF.

“Real-time Depth Enhanced Monocular Odometry” by J. Zhang, M. Kaess, and S. Singh. In IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems, IROS, (Chicago, IL), Sep. 2014, pp. 4973-4980.

Abstract

Visual odometry can be augmented by depth information such as provided by RGB-D cameras, or from lidars associated with cameras. However, such depth information can be limited by the sensors, leaving large areas in the visual images where depth is unavailable. Here, we propose a method to utilize the depth, even if sparsely available, in recovery of camera motion. In addition, the method utilizes depth triangulated from the previously estimated motion, and it also exploits salient visual features for which depth is unavailable. The core of our method is a bundle adjustment that refines the motion estimates in parallel by processing a sequence of images, in a batch optimization. We have evaluated our method in three sensor setups, one using an RGB-D camera, and two using combinations of a camera and a 3D lidar. Our method is rated #2 on the KITTI odometry benchmark irrespective of sensing modality, and is rated #1 among visual odometry methods.
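The abstract's key idea, combining features with and without depth in one motion estimate, can be illustrated with two residual types: a 3D-to-2D reprojection residual for features whose depth is known, and a 2D-to-2D epipolar residual for features without depth. The sketch below is illustrative only and is not taken from the paper; all function names are hypothetical, and it assumes normalized image coordinates and a known camera motion (R, t).

```python
import numpy as np

def reprojection_residual(R, t, p3d, uv):
    """Residual for a feature with known depth: transform its 3D point
    into the second frame, project it, and compare with the observed
    pixel (normalized image coordinates). Hypothetical illustration."""
    q = R @ p3d + t
    return q[:2] / q[2] - uv

def epipolar_residual(R, t, uv1, uv2):
    """Residual for a feature without depth: the epipolar constraint
    x2^T E x1 = 0 with essential matrix E = [t]_x R."""
    tx = np.array([[0.0, -t[2], t[1]],
                   [t[2], 0.0, -t[0]],
                   [-t[1], t[0], 0.0]])  # skew-symmetric matrix of t
    E = tx @ R
    x1 = np.array([uv1[0], uv1[1], 1.0])
    x2 = np.array([uv2[0], uv2[1], 1.0])
    return x2 @ E @ x1
```

In a batch optimization such as the bundle adjustment described above, residuals of both kinds from a sequence of images would be stacked and minimized jointly over the camera poses; the epipolar term constrains motion up to scale, while the depth-backed reprojection term fixes the scale.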

BibTeX entry:

@inproceedings{Zhang14iros,
   author = {J. Zhang and M. Kaess and S. Singh},
   title = {Real-time Depth Enhanced Monocular Odometry},
   booktitle = {IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems, IROS},
   pages = {4973--4980},
   address = {Chicago, IL},
   month = {Sep},
   year = {2014}
}
Last updated: October 10, 2016