Image attribute adaptation

Han, Yahong, Yang, Yi, Ma, Zhigang, Shen, Haoquan, Sebe, Nicu and Zhou, Xiaofang (2014) Image attribute adaptation. IEEE Transactions on Multimedia, 16(4): 1115-1126. doi:10.1109/TMM.2014.2306092


Author Han, Yahong
Yang, Yi
Ma, Zhigang
Shen, Haoquan
Sebe, Nicu
Zhou, Xiaofang
Title Image attribute adaptation
Journal name IEEE Transactions on Multimedia
ISSN 1520-9210
Publication date 2014-06-01
Year available 2014
Sub-type Article (original research)
DOI 10.1109/TMM.2014.2306092
Open Access Status Not yet assessed
Volume 16
Issue 4
Start page 1115
End page 1126
Total pages 12
Place of publication Piscataway, NJ, United States
Publisher Institute of Electrical and Electronics Engineers
Language eng
Abstract Visual attributes can be considered a middle-level semantic cue that bridges the gap between low-level image features and high-level object classes. Thus, attributes have the advantage of transcending specific semantic categories or describing objects across categories. Since attributes are often human-nameable and domain specific, much work constructs attribute annotations ad hoc or takes them from an application-dependent ontology. To facilitate other applications with attributes, it is necessary to develop methods that can adapt a well-defined set of attributes to novel images. In this paper, we propose a framework for image attribute adaptation. The goal is to automatically adapt the knowledge of attributes from a well-defined auxiliary image set to a target image set, thus assisting in predicting appropriate attributes for target images. In the proposed framework, we use a non-linear mapping function corresponding to multiple base kernels to map each training image of both the auxiliary and the target sets to a Reproducing Kernel Hilbert Space (RKHS), where we reduce the mismatch of data distributions between auxiliary and target images. In order to make use of unlabeled images, we incorporate a semi-supervised learning process. We also introduce a robust loss function into our framework to remove the shared irrelevance and noise of training images. Experiments on two pairs of auxiliary-target image sets demonstrate that the proposed framework achieves better performance in predicting attributes for target testing images than three baselines and two state-of-the-art domain adaptation methods.
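The abstract describes reducing the mismatch of data distributions between auxiliary and target images in an RKHS induced by multiple base kernels. As a minimal illustrative sketch only (not the paper's actual objective or algorithm), the snippet below measures such a mismatch via a squared maximum mean discrepancy under a fixed convex combination of RBF base kernels; all function names, bandwidths, and weights here are hypothetical.

```python
import numpy as np

def rbf_gram(X, Y, gamma):
    """Gram matrix K[i, j] = exp(-gamma * ||x_i - y_j||^2)."""
    sq = (np.sum(X**2, axis=1)[:, None]
          + np.sum(Y**2, axis=1)[None, :]
          - 2.0 * X @ Y.T)
    return np.exp(-gamma * np.maximum(sq, 0.0))

def mmd2(X_aux, X_tgt, gammas, weights):
    """Biased estimate of the squared maximum mean discrepancy between
    auxiliary and target samples, using a convex combination of RBF
    base kernels. Small values mean the two sets look similar in the
    combined RKHS; larger values indicate a distribution mismatch."""
    def k(A, B):
        return sum(w * rbf_gram(A, B, g) for w, g in zip(weights, gammas))
    return (k(X_aux, X_aux).mean()
            + k(X_tgt, X_tgt).mean()
            - 2.0 * k(X_aux, X_tgt).mean())

rng = np.random.default_rng(0)
aux = rng.normal(0.0, 1.0, size=(200, 5))       # auxiliary feature set
tgt_near = rng.normal(0.0, 1.0, size=(200, 5))  # same distribution as aux
tgt_far = rng.normal(2.0, 1.0, size=(200, 5))   # mean-shifted distribution

gammas = [0.5, 1.0, 2.0]           # hypothetical base-kernel bandwidths
weights = [1/3, 1/3, 1/3]          # fixed convex combination

# A matched target set yields a smaller discrepancy than a shifted one.
print(mmd2(aux, tgt_near, gammas, weights)
      < mmd2(aux, tgt_far, gammas, weights))  # True
```

In the paper's setting the kernel weights would be learned jointly with the predictor rather than fixed as above; this sketch only shows why mapping both sets through shared base kernels makes the distribution mismatch measurable.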
Keyword Domain adaptation
Image attributes
Multiple kernel learning
Robust multiple kernel regression
Q-Index Code C1
Q-Index Status Confirmed Code
Institutional Status UQ
Additional Notes Article # 6739133

Document type: Journal Article
Collections: Official 2015 Collection
School of Information Technology and Electrical Engineering Publications
Citation counts: Web of Science: 25
Scopus: 25
Created: Tue, 03 Jun 2014, 11:34:52 EST by System User on behalf of School of Information Technology and Electrical Engineering