Pop-up SLAM: Semantic Monocular Plane SLAM for Low-texture Environments

Download: PDF.

“Pop-up SLAM: Semantic Monocular Plane SLAM for Low-texture Environments” by S. Yang, Y. Song, M. Kaess, and S. Scherer. In IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems, IROS, (Daejeon, Korea), Oct. 2016, pp. 1222-1229.

Abstract

Existing simultaneous localization and mapping (SLAM) algorithms are not robust in challenging low-texture environments because there are only a few salient features. The resulting sparse or semi-dense map also conveys little information for motion planning. Though some works utilize planes or scene layout for dense map regularization, they require decent state estimation from other sources. In this paper, we propose a real-time monocular plane SLAM to demonstrate that scene understanding can improve both state estimation and dense mapping, especially in low-texture environments. The plane measurements come from a pop-up 3D plane model applied to each single image. We also combine planes with point-based SLAM to improve robustness. On a public TUM dataset, our algorithm generates a dense semantic 3D model with a pixel depth error of 6.2 cm while existing SLAM algorithms fail. On a 60 m long dataset with loops, our method creates a much better 3D model with a state estimation error of 0.67%.

BibTeX entry:

@inproceedings{Yang16iros,
   author = {S. Yang and Y. Song and M. Kaess and S. Scherer},
   title = {Pop-up {SLAM}: Semantic Monocular Plane {SLAM} for Low-texture
	Environments},
   booktitle = {IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems, IROS},
   pages = {1222--1229},
   address = {Daejeon, Korea},
   month = oct,
   year = {2016}
}
Last updated: August 17, 2017