Breaking multiple forms of view invariance

Wallis, Guy (2007). Breaking multiple forms of view invariance. In: Journal of Vision. Vision Sciences Society Meeting, Sarasota, FL, 11-16 May 2007 (article 334). doi:10.1167/7.9.334

Author Wallis, Guy
Title of paper Breaking multiple forms of view invariance
Conference name Vision Sciences Society Meeting
Conference location Sarasota, FL.
Conference dates 11 - 16 May, 2007
Proceedings title Journal of Vision
Place of Publication Rockville, MD
Publisher ARVO - The Association for Research in Vision and Ophthalmology
Publication Year 2007
Sub-type Fully published paper
DOI 10.1167/7.9.334
Open Access Status DOI
ISSN 1534-7362
Volume 7
Issue 9
Start page article 334
Total pages 1
Language eng
Abstract/Summary Object recognition based solely on spatial characteristics can only hope to provide limited tolerance to variations in appearance. In everyday life, natural objects may alter their appearance quite dramatically as a result of changes in viewpoint, distance, orientation and illumination. To combat this shortcoming, it has been proposed that the visual system may learn to associate disparate views of objects on the basis of their temporal rather than spatial characteristics. The reasoning behind this suggestion is that views which regularly occur in close temporal proximity are likely to be views of a transforming object. Previous experimental work has confirmed that invariance learning across depth rotations and changes in fixation is affected by the temporal characteristics of stimulus views. In this paper I describe how two other transformation types, fronto-parallel rotation and illumination change, are also affected by temporal association. Observers viewed sequences of faces undergoing rotation in the image plane or a change in illumination generated by running a light source around the face's vertical mid-line. Unbeknown to the observers, some of the faces changed their identity as the transformation took place. Two experiments were then run to ascertain whether the manipulation had led to the two endpoint views becoming regarded as valid views of a single face. In the first test, participants were required to discriminate true versus mixed-identity transformation sequences. In the second, discrimination performance was measured via a two-view, same-different task. Both experiments revealed compelling evidence for the predicted effect of manipulating the temporal characteristics of the face views. The results establish the temporal association mechanism as a general-purpose heuristic for coping with a diverse range of invariance learning.
They also serve to undermine models of human object recognition which propose the existence of any general-purpose view transformation or shape reconstruction system.
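The temporal association mechanism the abstract refers to is often modelled computationally with a Hebbian "trace" learning rule, in which a unit's recent activity lingers as a decaying trace so that views presented in close temporal succession strengthen connections onto the same unit. The sketch below is a minimal toy illustration of that general idea, not code from this paper; the function name, parameter values, and the binary "views" are all assumptions made for the example.

```python
# Toy sketch of a Hebbian "trace" rule for temporal view association:
# views occurring close together in time drive the same output unit via a
# decaying activity trace, so the unit learns to respond to all of them.

def train_trace(sequences, n_inputs, alpha=0.5, eta=0.8, epochs=5):
    """Train one output unit with a trace-modified Hebbian rule.

    sequences : list of view sequences; each view is a binary input vector.
    alpha     : learning rate (illustrative value).
    eta       : trace persistence; eta = 0 reduces to plain Hebbian learning.
    """
    w = [0.0] * n_inputs
    for _ in range(epochs):
        for seq in sequences:
            trace = 0.0  # reset the trace between unrelated sequences
            for view in seq:
                # Output activity; bootstrap with 1.0 when weights are still zero.
                y = sum(wi * xi for wi, xi in zip(w, view)) or 1.0
                # Exponentially decaying trace of recent output activity.
                trace = (1 - eta) * y + eta * trace
                # Hebbian update driven by the trace rather than the instant output.
                w = [wi + alpha * trace * xi for wi, xi in zip(w, view)]
    return w

# Two "views" of one object shown in close temporal succession become linked:
# both acquire positive weights onto the same unit.
view_a = [1, 0]
view_b = [0, 1]
weights = train_trace([[view_a, view_b]], n_inputs=2)
```

Because the trace carries activity from `view_a` into the update for `view_b`, both views end up exciting the same unit, which is the sense in which temporal proximity, rather than spatial similarity, binds views together.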
Subjects 170112 Sensory Processes, Perception and Performance
Q-Index Code EX
Q-Index Status Provisional Code
Institutional Status Unknown

Created: Tue, 31 Mar 2009, 12:01:58 EST by Ms Sarada Rao on behalf of School of Human Movement and Nutrition Sciences