Audio-visual speech timing sensitivity is enhanced in cluttered conditions.

Roseboom, Warrick, Nishida, Shin'ya, Fujisaki, Waka and Arnold, Derek H. (2011) Audio-visual speech timing sensitivity is enhanced in cluttered conditions. PLoS One, 6(4): 18309-1-18309-8. doi:10.1371/journal.pone.0018309


Author Roseboom, Warrick
Nishida, Shin'ya
Fujisaki, Waka
Arnold, Derek H.
Title Audio-visual speech timing sensitivity is enhanced in cluttered conditions.
Journal name PLoS One
ISSN 1932-6203
Publication date 2011-04-06
Sub-type Article (original research)
DOI 10.1371/journal.pone.0018309
Open Access Status DOI
Volume 6
Issue 4
Start page 18309-1
End page 18309-8
Total pages 8
Place of publication San Francisco, CA, United States
Publisher Public Library of Science
Collection year 2012
Language eng
Abstract Events encoded in separate sensory modalities, such as audition and vision, can seem to be synchronous across a relatively broad range of physical timing differences. This may suggest that the precision of audio-visual timing judgments is inherently poor. Here we show that this is not necessarily true. We contrast timing sensitivity for isolated streams of audio and visual speech, and for streams of audio and visual speech accompanied by additional, temporally offset, visual speech streams. We find that the precision with which synchronous streams of audio and visual speech are identified is enhanced by the presence of additional streams of asynchronous visual speech. Our data suggest that timing perception is shaped by selective grouping processes, which can result in enhanced precision in temporally cluttered environments. The imprecision suggested by previous studies might therefore be a consequence of examining isolated pairs of audio and visual events. We argue that when an isolated pair of cross-modal events is presented, they tend to group perceptually and to seem synchronous as a consequence. We have revealed greater precision by providing multiple visual signals, possibly allowing a single auditory speech stream to group selectively with the most synchronous visual candidate. The grouping processes we have identified might be important in daily life, such as when we attempt to follow a conversation in a crowded room.
Keyword Audiovisual Aids
Speech Disorders
Sensory system parameters
Q-Index Code C1
Q-Index Status Confirmed Code
Institutional Status UQ

Document type: Journal Article
Sub-type: Article (original research)
Collections: Official 2012 Collection
School of Psychology Publications
Citation counts: Cited 16 times in Thomson Reuters Web of Science.
Cited 17 times in Scopus.
Created: Mon, 19 Mar 2012, 00:56:17 EST by Mrs Alison Pike on behalf of School of Psychology