Visual Learning for Mobile Robot Localisation

David Prasser (2008). Visual Learning for Mobile Robot Localisation. PhD Thesis, School of Information Technology and Electrical Engineering, The University of Queensland.

       
Author: David Prasser
Thesis Title: Visual Learning for Mobile Robot Localisation
School, Centre or Institute: School of Information Technology and Electrical Engineering
Institution: The University of Queensland
Publication date: 2008-03
Thesis type: PhD Thesis
Supervisors: Gallagher, Marcus R.; Wyeth, Gordon F.
Subjects: 290000 Engineering and Technology
Formatted abstract:

This thesis investigates visual information processing methods that a mobile robot can use to track its location. The methods learn the visual appearance of different places rather than attempting to measure the robot's position relative to landmarks. The methods are designed to interface with a biologically inspired algorithm for reasoning about robot location.
Localisation by visual appearance implies a processing scheme that converts camera data to position information without employing a geometric world model. The objective of such an approach is to sidestep the difficulties of interpreting, and more significantly of building, geometric environment models with vision. The model-building problem becomes particularly difficult when it must occur simultaneously with localisation (SLAM), as when a robot is required to explore an environment without becoming lost in the process. Two paradigms of visual processing for this problem are examined: view learning and feature learning.
View learning is investigated in both indoor and outdoor scenarios, showing that simple processing of visual appearance can be used as an external sensor in a robotic SLAM system. A substantial mapping and localisation experiment can be performed successfully using methods as simple as comparing very low resolution images. Three image representations are considered for both indoor and outdoor environments: low resolution greyscale images; edge features; and colour histograms. The consistency and usefulness of these representations were investigated by examining recognition and accuracy rates between two indoor experiments separated by a week. Complex cell edge features are shown to be a sensible choice for indoor environments, and colour histograms are shown to provide a good generic descriptor in outdoor situations.
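A minimal sketch of the low resolution greyscale and colour histogram comparisons described above, assuming images arrive as numpy arrays with values in [0, 1]; the function names, target resolution, and similarity conventions are illustrative assumptions, not the thesis's actual parameters:

```python
import numpy as np

def downsample(image, size=(8, 6)):
    """Block-average a 2-D greyscale array down to size = (width, height)."""
    h, w = image.shape
    bh, bw = h // size[1], w // size[0]
    cropped = image[:bh * size[1], :bw * size[0]]
    return cropped.reshape(size[1], bh, size[0], bw).mean(axis=(1, 3))

def greyscale_similarity(a, b):
    """Similarity of two views: one minus the mean absolute difference
    of their low resolution images (pixel values assumed in [0, 1])."""
    return 1.0 - np.mean(np.abs(downsample(a) - downsample(b)))

def colour_histogram(image, bins=8):
    """Normalised joint RGB histogram of an (H, W, 3) image in [0, 1]."""
    hist, _ = np.histogramdd(image.reshape(-1, 3), bins=(bins,) * 3,
                             range=((0.0, 1.0),) * 3)
    return hist.ravel() / hist.sum()

def histogram_similarity(a, b):
    """Histogram intersection: 1.0 when colour distributions match."""
    return np.minimum(colour_histogram(a), colour_histogram(b)).sum()
```

Histogram intersection is one common choice of histogram distance; the greyscale comparison shows why very low resolution images suffice, since block averaging discards fine detail while preserving the coarse appearance of a place.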
View learning underlies many visual appearance based SLAM methods. It is tested in both indoor and outdoor environments, and found to be practicable in the short term in both situations. The indoor experiments examine sensitivity to environmental changes between daytime and night-time, and indicate that, given appropriate pre-processing, view learning is stable enough to construct long term maps. There is a trade-off between recognition and accuracy as the recognition threshold is varied. View learning does not need to be an exact affair: views can ambiguously code multiple places, and places can respond to multiple views. The important part of the process is the conversion of visual information into a sparse vector that is consistent with robot position.
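To make the view learning process concrete, here is a hedged sketch of a view memory with a recognition threshold; the class name, threshold value, and 0-to-1 similarity convention are assumptions for illustration, not the thesis's implementation:

```python
import numpy as np

class ViewMemory:
    """Store view templates; report a sparse activation vector over them."""

    def __init__(self, similarity, threshold=0.85):
        self.similarity = similarity    # e.g. greyscale_similarity above
        self.threshold = threshold      # the recognition threshold
        self.views = []                 # learned view templates

    def update(self, view):
        """Match `view` against memory; learn it as a new view when nothing
        exceeds the threshold. Returns scores with non-matches zeroed."""
        scores = np.array([self.similarity(view, v) for v in self.views])
        active = scores >= self.threshold
        if not active.any():
            self.views.append(view)             # previously unseen view
            scores = np.append(scores, 1.0)
            active = np.append(active, True)
        return np.where(active, scores, 0.0)    # sparse activation vector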
The implementation details reveal several points about appearance based SLAM. Without panoramic sensors it is unreasonable to try to construct a map that is independent of the robot's heading. Redundancy or ambiguity in the environment (which can be amplified by the need for spatial generalisation) makes spatial filtering a necessity. The proximity of objects to the robot affects its generalisation capabilities, and the robot's movement behaviour affects its ability to remain localised.
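The role of spatial filtering can be illustrated with a generic discrete Bayes filter over a ring of places. The thesis instead interfaces with a biologically inspired localisation algorithm, so this sketch, with hypothetical names and a crude three-tap motion model, shows only why filtering suppresses ambiguous view matches:

```python
import numpy as np

def spatial_filter_step(belief, view_likelihood,
                        motion_kernel=(0.1, 0.8, 0.1)):
    """One predict-update step of a discrete Bayes filter over places
    arranged in a ring. `belief` is the prior over places and
    `view_likelihood` the per-place evidence from view matching;
    `motion_kernel` crudely models stay / advance one / advance two."""
    # Predict: spread probability forward around the ring of places.
    predicted = sum(w * np.roll(belief, shift)
                    for shift, w in zip((0, 1, 2), motion_kernel))
    # Update: weight prediction by the (possibly ambiguous) view evidence.
    posterior = predicted * view_likelihood + 1e-12  # floor avoids all-zero
    return posterior / posterior.sum()
```

A view that matches several places contributes mass to all of them, but only the place consistent with the motion prediction keeps accumulating probability over successive steps, which is why redundancy in the environment can be tolerated.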
Feature learning provides an alternative to view learning, avoiding the possibility of an ever expanding view memory. Ideas developed in the sparse coding literature appear to provide a principled method for deriving relevant image features for a particular type of environment. In particular, sparse coding allows for a finite feature set that is adapted to the statistics of its environment. Learning the relationship between individual image features and locations in the environment is a more difficult task, as there is more positional ambiguity in the system. However, the ambiguity introduced can be filtered to produce consistent robot positions.
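As a rough illustration of the finite, environment-adapted feature set idea, the following sketch encodes an image patch against a learned dictionary with greedy matching pursuit and tallies feature-place co-occurrences; the thesis's actual sparse coding and association methods are not specified here, and all names are hypothetical:

```python
import numpy as np

def sparse_encode(patch, dictionary, n_active=3):
    """Greedy matching pursuit: approximate `patch` with a few unit-norm
    atoms (columns of `dictionary`), giving a sparse activation code."""
    residual = patch.astype(float).ravel()
    code = np.zeros(dictionary.shape[1])
    for _ in range(n_active):
        responses = dictionary.T @ residual      # match atoms to residual
        k = int(np.argmax(np.abs(responses)))    # best-matching feature
        code[k] += responses[k]
        residual -= responses[k] * dictionary[:, k]
    return code

def accumulate_association(counts, code, place):
    """Tally which features fire at which places. Individual features are
    positionally ambiguous, so the resulting per-place likelihoods still
    need the spatial filtering sketched earlier."""
    counts[:, place] += np.abs(code)
    return counts
```

Because the dictionary has a fixed number of columns, the feature set stays finite regardless of how long the robot operates, in contrast to a view memory that can grow with every new place.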
Rapid learning and recognition of visual appearance, coupled with appropriate methods for spatial generalisation, allow robot position to be reliably tracked without a geometric model of the environment.


