Action recognition by exploring data distribution and feature correlation

Wang, Sen, Yang, Yi, Ma, Zhigang, Li, Xue, Pang, Chaoyi and Hauptmann, Alexander G. (2012). Action recognition by exploring data distribution and feature correlation. In: 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Providence, RI, United States, 16-21 June 2012 (pp. 1370-1377). doi:10.1109/CVPR.2012.6247823

Author Wang, Sen
Yang, Yi
Ma, Zhigang
Li, Xue
Pang, Chaoyi
Hauptmann, Alexander G.
Title of paper Action recognition by exploring data distribution and feature correlation
Conference name IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Conference location Providence, RI, United States
Conference dates 16-21 June 2012
Proceedings title 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Journal name IEEE Conference on Computer Vision and Pattern Recognition. Proceedings
Place of Publication Washington, DC, United States
Publisher IEEE Computer Society
Publication Year 2012
Sub-type Fully published paper
DOI 10.1109/CVPR.2012.6247823
ISBN 9781467312264, 9781467312271
ISSN 1063-6919
Start page 1370
End page 1377
Total pages 8
Collection year 2013
Language eng
Abstract/Summary Human action recognition in videos draws strong research interest in computer vision because of its promising applications for video surveillance, video annotation, interactive gaming, etc. However, the amount of video data containing human actions is increasing exponentially, which makes the management of these resources a challenging task. Given a database with huge volumes of unlabeled videos, it is prohibitive to manually assign specific action types to these videos. Considering that it is much easier to obtain a small number of labeled videos, a practical solution for organizing them is to build a mechanism which is able to conduct action annotation automatically by leveraging the limited labeled videos. Motivated by this intuition, we propose an automatic video annotation algorithm by integrating semi-supervised learning and shared structure analysis into a joint framework for human action recognition. We apply our algorithm on both synthetic and realistic video datasets, including KTH [20], the CareMedia dataset [1], the YouTube action dataset [12] and its extended version, UCF50 [2]. Extensive experiments demonstrate that the proposed algorithm outperforms the compared algorithms for action recognition. Most notably, our method has a very distinct advantage over other compared algorithms when we have only a few labeled samples.
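The abstract's core idea is semi-supervised annotation: a few labeled videos propagate their action labels to many unlabeled ones. The paper's joint framework (semi-supervised learning plus shared structure analysis) is defined in the full text; as a minimal, hypothetical illustration of only the semi-supervised ingredient, here is a basic graph-based label-propagation sketch over generic feature vectors. The function name, the RBF affinity choice, and the `sigma` parameter are assumptions for illustration, not the authors' method.

```python
import numpy as np

def propagate_labels(X, y, n_iters=50, sigma=1.0):
    """Illustrative graph-based label propagation (not the paper's algorithm).

    X : (n, d) array of per-video feature vectors.
    y : (n,) integer labels, with -1 marking unlabeled samples.
    Returns an (n,) array of predicted labels for all samples.
    """
    n = X.shape[0]
    # Pairwise RBF affinities between samples (similar videos get high weight).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    P = W / W.sum(axis=1, keepdims=True)  # row-normalized transition matrix

    classes = np.unique(y[y >= 0])
    labeled = y >= 0
    F = np.zeros((n, classes.size))
    F[labeled, np.searchsorted(classes, y[labeled])] = 1.0

    for _ in range(n_iters):
        F = P @ F                         # diffuse label mass over the graph
        F[labeled] = 0.0                  # clamp labeled samples back to
        F[labeled, np.searchsorted(classes, y[labeled])] = 1.0  # their labels
    return classes[F.argmax(axis=1)]
```

With two well-separated clusters and one labeled sample per cluster, the unlabeled samples inherit the label of their nearby labeled neighbor, which is the behavior the abstract highlights: useful predictions from only a few labeled samples.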
Keyword Framework
Q-Index Code E1
Q-Index Status Confirmed Code
Institutional Status UQ

Citation counts: Cited 6 times in Thomson Reuters Web of Science
Cited 10 times in Scopus
Access Statistics: 62 Abstract Views, 1 File Downloads
Created: Sun, 09 Dec 2012, 00:58:57 EST by System User on behalf of Scholarly Publishing and Digitisation Service