Robot localization using 3D-models and an off-board monocular camera

Hoermann, Stefan and Borges, Paulo Vinicius Koerich (2011). Robot localization using 3D-models and an off-board monocular camera. In: Proceedings of the IEEE International Conference on Computer Vision, 2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops 2011), Barcelona, Spain, 6-13 November 2011, pp. 1006-1013. doi:10.1109/ICCVW.2011.6130361

Author Hoermann, Stefan
Borges, Paulo Vinicius Koerich
Title of paper Robot localization using 3D-models and an off-board monocular camera
Conference name 2011 IEEE International Conference on Computer Vision Workshops, ICCV Workshops 2011
Conference location Barcelona, Spain
Conference dates 6-13 November 2011
Proceedings title Proceedings of the IEEE International Conference on Computer Vision
Journal name 2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops)
Place of Publication Piscataway, NJ, United States
Publisher IEEE (Institute of Electrical and Electronics Engineers)
Publication Year 2011
Sub-type Fully published paper
DOI 10.1109/ICCVW.2011.6130361
ISBN 9781467300636
Start page 1006
End page 1013
Total pages 8
Collection year 2012
Language eng
Abstract/Summary Robot localization is one of the most common tasks in autonomous systems. Unlike most vision-based methods, which perform localization using cameras mounted on the robot, we propose a system that uses an off-board camera to achieve localization. For this purpose, we use a similarity measure between a camera image and a synthetic image generated from a 3D object model. In contrast to other methods, no prior training period is necessary, as we use a model of shading appearance based on the surface curvature of the 3D model. Assuming a reasonably planar area observed by an off-board camera, an initial position estimate in the camera image (based on 2D blob tracking) allows the 3D model to be rendered near the object in the image. From this initial guess, the model is compared against the real image so that it converges to the true vehicle pose. Experiments are performed with a forklift-like robot in an outdoor industrial environment under different illumination conditions. The results are compared against a laser-based ground truth and illustrate the applicability of the method, with a low average error.
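
The following is a minimal illustrative sketch, not the authors' implementation, of the render-and-compare refinement described in the abstract: starting from a 2D blob-tracking estimate on a planar area, a synthetic image of the vehicle's 3D model is rendered at candidate planar poses (x, y, heading) and the pose is refined by maximizing a similarity measure against the camera image. The render_model placeholder, the use of normalized cross-correlation as the similarity measure, and the Nelder-Mead optimizer from SciPy are all assumptions made here for illustration; the paper does not specify these choices.

    # Sketch of render-and-compare pose refinement against an off-board camera image.
    # render_model is a hypothetical placeholder for projecting and shading the 3D model.
    import numpy as np
    from scipy.optimize import minimize

    def render_model(pose, image_shape):
        """Placeholder renderer: synthetic grayscale image of the 3D model at
        pose = (x, y, heading), projected into the off-board camera view.
        A real system would use the calibrated camera and the shaded 3D model."""
        x, y, heading = pose
        synthetic = np.zeros(image_shape, dtype=np.float32)
        # ... project and shade the 3D model here ...
        return synthetic

    def normalized_cross_correlation(a, b):
        """Similarity between synthetic and real images (higher is better)."""
        a = a - a.mean()
        b = b - b.mean()
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float((a * b).sum() / denom) if denom > 0 else 0.0

    def refine_pose(camera_image, initial_pose):
        """Refine the planar pose by maximizing image similarity
        (minimizing its negative) with a derivative-free local search."""
        def cost(pose):
            synthetic = render_model(pose, camera_image.shape)
            return -normalized_cross_correlation(synthetic, camera_image)
        result = minimize(cost, x0=np.asarray(initial_pose, dtype=float),
                          method="Nelder-Mead")
        return result.x  # refined (x, y, heading)

    # Usage: start from a blob-tracking guess and refine against a camera frame.
    camera_frame = np.random.rand(240, 320).astype(np.float32)  # stand-in for a real frame
    initial_guess = (1.0, 2.0, 0.1)                             # (x [m], y [m], heading [rad])
    refined = refine_pose(camera_frame, initial_guess)
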
Keyword Pose estimation
Models
Q-Index Code E1
Q-Index Status Provisional Code
Institutional Status UQ
Additional Notes Article number 6130361

Web of Science citation count Cited 0 times
Scopus citation count Cited 4 times
Created: Mon, 25 Jun 2012, 09:52:29 EST by Paulo Borges on behalf of the School of Information Technology and Electrical Engineering