Tracking with Multiple Cameras for Video Surveillance

Bhuyan, M. K., Lovell, B. C. and Bigdeli, A. (2008). Tracking with Multiple Cameras for Video Surveillance. In: Digital Image Computing Techniques and Applications: 9th Biennial Conference of the Australian Pattern Recognition Society (DICTA 2007), Glenelg, SA, Australia, 3-5 December 2007, pp. 592-599. doi:10.1109/DICTA.2007.4426852

Attached Files
Tracking_with_Multiple_Cameras_for_Video_Surveillance.pdf (Full text, application/pdf, 202.47 KB)

Author Bhuyan, M. K.
Lovell, B. C.
Bigdeli, A.
Title of paper Tracking with Multiple Cameras for Video Surveillance
Conference name 9th Biennial Conference of the Australian Pattern Recognition Society
Conference location Glenelg, SA, Australia
Conference dates 3-5 December 2007
Proceedings title Digital Image Computing Techniques and Applications
Journal name Proceedings - Digital Image Computing Techniques and Applications: 9th Biennial Conference of the Australian Pattern Recognition Society, DICTA 2007
Place of Publication Glenelg, SA, Australia
Publisher IEEE Computer Society
Publication Year 2008
Year available 2008
Sub-type Fully published paper
DOI 10.1109/DICTA.2007.4426852
Open Access Status File (Author Post-print)
ISBN 0769530672
Start page 592
End page 599
Total pages 8
Collection year 2009
Language eng
Abstract/Summary The large shape variability and partial occlusions challenge most object detection and tracking methods for non-rigid targets such as pedestrians. Single-camera tracking is limited in the scope of its applications because of the limited field of view (FOV) of a camera. This motivates the need for a multiple-camera system to completely monitor and track a target, especially in the presence of occlusion. When the object is viewed with multiple cameras, there is a fair chance that it is not occluded in all of them simultaneously. In this paper, we develop a method for fusing the tracks obtained from two cameras placed at different positions. First, the object to be tracked is identified on the basis of shape information measured by the MPEG-7 ART shape descriptor. Single-camera tracking is then performed with the unscented Kalman filter, and finally the tracks from the two cameras are fused. A sensor network model is proposed to deal with situations in which the target moves out of the field of view of a camera and re-enters after some time. Experimental results demonstrate the effectiveness of the proposed scheme for tracking objects under occlusion.
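Illustrative sketch (not part of the record, and not the authors' implementation): the abstract outlines a pipeline of per-camera unscented Kalman filter tracking followed by fusion of the two camera tracks. The Python sketch below shows a generic UKF step on an assumed constant-velocity motion model with a position-only measurement, and fuses the two single-camera estimates with a simple covariance-weighted rule that assumes independent errors; the models, noise levels, UKF parameters and fusion rule are all illustrative assumptions, since the abstract does not specify them.

import numpy as np

def merwe_sigma_points(x, P, alpha=1.0, beta=2.0, kappa=1.0):
    """Scaled sigma points plus mean/covariance weights (parameters are illustrative)."""
    n = x.size
    lam = alpha**2 * (n + kappa) - n
    U = np.linalg.cholesky((n + lam) * P)           # matrix square root (lower triangular)
    pts = np.vstack([x, x + U.T, x - U.T])          # 2n+1 sigma points, one per row
    Wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    Wc = Wm.copy()
    Wm[0] = lam / (n + lam)
    Wc[0] = Wm[0] + (1 - alpha**2 + beta)
    return pts, Wm, Wc

def ukf_step(x, P, z, fx, hx, Q, R):
    """One unscented Kalman filter predict + update step for a single camera track."""
    # Predict: propagate sigma points through the motion model fx.
    pts, Wm, Wc = merwe_sigma_points(x, P)
    Xp = np.array([fx(p) for p in pts])
    x_pred = Wm @ Xp
    P_pred = Q + sum(w * np.outer(d, d) for w, d in zip(Wc, Xp - x_pred))
    # Update: propagate predicted sigma points through the measurement model hx.
    pts, Wm, Wc = merwe_sigma_points(x_pred, P_pred)
    Zp = np.array([hx(p) for p in pts])
    z_pred = Wm @ Zp
    S = R + sum(w * np.outer(d, d) for w, d in zip(Wc, Zp - z_pred))
    Pxz = sum(w * np.outer(dx, dz) for w, dx, dz in zip(Wc, pts - x_pred, Zp - z_pred))
    K = Pxz @ np.linalg.inv(S)
    return x_pred + K @ (z - z_pred), P_pred - K @ S @ K.T

def fuse_tracks(x1, P1, x2, P2):
    """Covariance-weighted fusion of two single-camera estimates (assumes independence)."""
    Pf = np.linalg.inv(np.linalg.inv(P1) + np.linalg.inv(P2))
    xf = Pf @ (np.linalg.inv(P1) @ x1 + np.linalg.inv(P2) @ x2)
    return xf, Pf

# Usage: assumed constant-velocity state [x, y, vx, vy]; each camera measures position [x, y].
dt = 1.0
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
fx = lambda s: F @ s          # motion model (assumption)
hx = lambda s: s[:2]          # measurement model (assumption)
Q = 0.01 * np.eye(4)          # process noise (assumption)
R = 1.0 * np.eye(2)           # measurement noise (assumption)

x1, P1 = np.array([0.0, 0.0, 1.0, 0.5]), np.eye(4)
x2, P2 = x1.copy(), np.eye(4)
x1, P1 = ukf_step(x1, P1, np.array([1.1, 0.4]), fx, hx, Q, R)   # camera 1 measurement
x2, P2 = ukf_step(x2, P2, np.array([0.9, 0.6]), fx, hx, Q, R)   # camera 2 measurement
xf, Pf = fuse_tracks(x1, P1, x2, P2)
print("fused position estimate:", xf[:2])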
Subjects 280203 Image Processing
280207 Pattern Recognition
280208 Computer Vision
810107 National Security
970108 Expanding Knowledge in the Information and Computing Sciences
Keyword detection and tracking methods
multiple-camera system
fusion of tracks
Q-Index Code EX
Q-Index Status Provisional Code

Citation counts Scopus: Cited 2 times
Created: Wed, 06 May 2009, 10:31:50 EST by Dr Ildiko Horvath on behalf of School of Information Technology and Electrical Engineering