VISUAL LOCALISATION IN DYNAMIC NON-UNIFORM LIGHTING

Stephen Nuske (2008). Visual Localisation in Dynamic Non-Uniform Lighting. PhD Thesis, School of Information Technology and Electrical Engineering, The University of Queensland.

       
Attached Files:
  n40116224_phd_abstract.pdf (application/pdf, 12.71 KB, 7 downloads)
  n40116224_phd_totalthesis.pdf (application/pdf, 11.64 MB, 14 downloads)
  videos.zip: Zip file of Videos (application/x-zip, 63.18 MB, 1 download)
Author: Stephen Nuske
Thesis Title: Visual Localisation in Dynamic Non-Uniform Lighting
School, Centre or Institute: School of Information Technology and Electrical Engineering
Institution: The University of Queensland
Publication date: 2008-10
Thesis type: PhD Thesis
Total pages: 213
Total colour pages: 61
Total black and white pages: 152
Collection year: 2009
Subjects: 09 Engineering
Abstract/Summary:

Dynamic non-uniform lighting conditions, prevalent in many field robot applications, cause drastic changes in the visual information captured by camera images, resulting in major difficulties for mobile robots attempting to localise visually. Most current solutions to the problem rely on extracting visual information from images that is decoupled from the effects of lighting; this is not possible in many situations. Chrominance information is often cited as having some invariance to lighting changes, which is confirmed by experiments in this thesis. However, in the bland application environments investigated, chrominance is not a pertinent metric, indicating that chrominance is not the complete solution to the lighting problem. Descriptions of the intensity gradient are also cited as having robustness to lighting changes, and many image-point feature descriptions based on the intensity gradient are commonly used as a basis for visual localisation. However, the non-uniform effects of lighting – shadows and shading – are tangled into the intensity gradient, making these descriptions sensitive to non-uniform lighting changes. Experiments are presented which reveal that image-point features recorded at one time of day cannot be reliably matched with images captured only one or two hours later, after typical changes in sunlight.

It appears that autonomously building visual maps which permit geometric localisation in many lighting conditions remains an unsolved problem. This thesis therefore develops systems based on manually generated maps, created a priori, which ensure that only permanent, invariant information is included within the map, allowing reliable localisation to be achieved in many conditions.

The first proposed visual localisation system is for autonomous ground vehicles operating at outdoor industrial sites. The system avoids the problem of mapping from fluctuating visual information by using a professionally surveyed 3D edge map of the permanent buildings. The vehicles are fitted with fish-eye cameras that often have direct sunlight in the field of view; the resulting camera-exposure problem is dealt with by an intelligent exposure control algorithm. Results from the system show accurate localisation during the full range of lighting conditions experienced over a day.

The second visual localisation framework is for submarines navigating underwater structures, where the only light source is a spotlight mounted on the vehicle. The moving vehicle, and hence the changing incident angle of the light source, causes major variations in the appearance of the structure, making it difficult to employ traditional tracking techniques. The proposed localisation system uses the novel idea of incorporating a light source model. The light model is used to render synthetic images of the scene which accurately recreate the non-uniform lighting effects; the synthetic images are then compared with the real camera image to localise the vehicle. The idea of using a light model is partly motivated by the human visual system's understanding of the light source within a scene, and partly by the limitations of the traditional approaches that factor out lighting. Using a light model within a visual localisation system enables a more natural link between the internal environment representation and the image, and is demonstrated to allow successful localisation in this difficult visual scenario.

The results of the two proposed localisation systems are encouraging, given the extremely challenging dynamic non-uniform lighting in each environment. Both systems have attracted the interest of industry partners, and the projects will continue to be developed, with the goal of progressing them into fully functioning robotic systems.
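The partial lighting invariance of chrominance mentioned in the abstract can be illustrated with a minimal sketch (not taken from the thesis): dividing each RGB channel by the channel sum yields chromaticity coordinates that cancel a uniform brightness scaling, which is the usual argument for chrominance-based robustness to illumination intensity.

```python
# Sketch: chromaticity coordinates as a simple illumination-invariant cue.
# Scaling (r, g, b) by a constant k (e.g. a shadow dimming the whole pixel)
# leaves the chromaticity unchanged, because k cancels in the ratio.

def chromaticity(r, g, b):
    """Return (r', g') chromaticity; invariant to (r, g, b) -> (k*r, k*g, k*b)."""
    s = r + g + b
    if s == 0:
        return (0.0, 0.0)  # undefined for pure black; pick a convention
    return (r / s, g / s)

# The same surface seen in bright light and in shadow (half the intensity)
# maps to the same chromaticity:
bright = chromaticity(200, 100, 50)
shadow = chromaticity(100, 50, 25)
assert bright == shadow
```

Note this only cancels a *uniform* intensity change; it does not help with the non-uniform effects (shadows crossing a surface, shading) that the thesis identifies as the harder problem, nor in the "bland" low-saturation environments where chrominance carries little information.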
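The exposure problem described for the ground-vehicle system (direct sunlight in a fish-eye view) can be sketched with a simple proportional controller; this is an assumed illustration, not the thesis's algorithm. The idea is to drive the mean intensity of a region of interest toward a target while ignoring saturated pixels such as the sun itself.

```python
# Sketch of proportional exposure control (assumed, simplified):
# adjust exposure so the mean of the non-saturated ROI pixels approaches
# a target grey level, stepping on a multiplicative scale.

def exposure_update(exposure_ms, roi_pixels, target_mean=128.0,
                    gain=0.5, sat_level=250):
    """Return a new exposure time given 8-bit ROI pixel intensities."""
    usable = [p for p in roi_pixels if p < sat_level]  # mask sun/saturated pixels
    if not usable:
        return exposure_ms * 0.5  # everything saturated: back off hard
    mean = sum(usable) / len(usable)
    # Proportional step toward the target mean; clamp to sensor limits.
    new_exposure = exposure_ms * (1.0 + gain * (target_mean - mean) / target_mean)
    return min(max(new_exposure, 0.01), 100.0)

# An under-exposed frame (mean 64) lengthens exposure from 10 ms to 12.5 ms:
assert abs(exposure_update(10.0, [64] * 100) - 12.5) < 1e-9
```

The gain, target level, and saturation threshold here are illustrative parameters; a real controller on a vehicle would also have to handle the camera's discrete exposure steps and latency.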
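The light-model idea in the underwater system (render synthetic images under the vehicle's spotlight, compare them with the camera image) can be sketched in one dimension under assumed simplifications: a Lambertian surface, inverse-square falloff, and sum-of-squared-differences matching over candidate poses. None of these specifics are from the thesis; they only illustrate the render-and-compare structure.

```python
# Sketch (assumed, simplified): model-based localisation with a light model.
# Render a 1-D "image" of a wall lit by a spotlight at the candidate pose,
# then keep the pose whose rendering best matches the observed image.
import math

def render(pose_x, width=16, surface_z=2.0):
    """Synthetic 1-D image of a wall lit by a spotlight at x = pose_x."""
    img = []
    for u in range(width):
        dx = u - pose_x                       # offset from the beam centre
        d2 = dx * dx + surface_z * surface_z  # squared distance to surface point
        cos_i = surface_z / math.sqrt(d2)     # Lambertian incidence term
        img.append(cos_i / d2)                # irradiance ~ cos(i) / d^2
    return img

def localise(camera_img, candidates):
    """Pick the candidate pose minimising SSD against the camera image."""
    def ssd(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(candidates, key=lambda p: ssd(render(p), camera_img))

# A synthetic "observation" from pose 7.0 is recovered from the candidates:
observed = render(7.0)
assert localise(observed, [3.0, 5.0, 7.0, 9.0]) == 7.0
```

The point of the structure, as in the thesis, is that the lighting variation is explained by the model rather than factored out of the image: the bright pooling of the beam moves with the vehicle in both the rendering and the observation.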
Additional Notes: 16, 18, 22, 42, 46, 49, 58, 59, 60, 64, 69, 70, 71, 72, 73, 74, 80, 87, 90, 91, 93, 95, 96, 101, 105, 111, 114, 115, 119, 120, 121, 122, 123, 126, 127, 128, 131, 133, 135, 139, 140, 141, 142, 144, 147, 151, 157, 158, 161, 166, 167, 169, 170, 171, 172, 173, 174, 176, 177, 202, 207

 
Access Statistics: 204 Abstract Views, 28 File Downloads
Created: Tue, 21 Oct 2008, 16:07:37 EST by Catherine Kelley on behalf of Library - Information Access Service