Fine-grained classification via mixture of deep convolutional neural networks

Ge, Zong Yuan, Bewley, Alex, McCool, Christopher, Corke, Peter, Upcroft, Ben and Sanderson, Conrad (2016). Fine-grained classification via mixture of deep convolutional neural networks. In: Proceedings of the 2016 IEEE Winter Conference on Applications of Computer Vision (WACV 2016), Lake Placid, NY, U.S.A., 7-10 March 2016, pp. 1-6. doi:10.1109/WACV.2016.7477700

Attached Files
ge_fine_grained_classification_mixture_dcnn_wacv_2016.pdf (application/pdf, 1.71MB)

Author Ge, Zong Yuan
Bewley, Alex
McCool, Christopher
Corke, Peter
Upcroft, Ben
Sanderson, Conrad
Title of paper Fine-grained classification via mixture of deep convolutional neural networks
Conference name 2016 IEEE Winter Conference on Applications of Computer Vision, WACV 2016
Conference location Lake Placid, NY, U.S.A.
Conference dates 7-10 March 2016
Convener IEEE
Proceedings title IEEE Winter Conference on Applications of Computer Vision, WACV 2016
Place of Publication Piscataway, NJ, United States
Publisher Institute of Electrical and Electronics Engineers
Publication Year 2016
Year available 2016
Sub-type Fully published paper
DOI 10.1109/WACV.2016.7477700
Open Access Status Not Open Access
Start page 1
End page 6
Total pages 6
Collection year 2017
Language eng
Abstract/Summary We present a novel deep convolutional neural network (DCNN) system for fine-grained image classification, called a mixture of DCNNs (MixDCNN). The fine-grained image classification problem is characterised by large intra-class variations and small inter-class variations. To overcome these problems, our proposed MixDCNN system partitions images into K subsets of similar images and learns an expert DCNN for each subset. The output from each of the K DCNNs is combined to form a single classification decision. In contrast to previous techniques, we provide a formulation to perform joint end-to-end training of the K DCNNs simultaneously. Extensive experiments on three datasets using two network structures (AlexNet and GoogLeNet) show that the proposed MixDCNN system consistently outperforms other methods, providing a relative improvement of 12.7% and achieving state-of-the-art results on two datasets.
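The combination step described in the abstract — K expert DCNNs whose outputs are merged into one decision — can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes each expert is weighted by a softmax over the experts' best (maximum) class scores, so the most confident expert dominates the mixture; the function name and shapes are hypothetical.

```python
import numpy as np

def mixdcnn_combine(expert_logits):
    """Combine K expert outputs into one classification decision (sketch).

    expert_logits: array of shape (K, num_classes), one row of class
    scores per expert DCNN. Each expert's weight is a softmax over the
    experts' best (maximum) class scores, so the expert that is most
    confident about the given image dominates the final decision.
    """
    expert_logits = np.asarray(expert_logits, dtype=float)
    best = expert_logits.max(axis=1)       # each expert's top class score
    alpha = np.exp(best - best.max())      # numerically stable softmax over experts
    alpha /= alpha.sum()
    return alpha @ expert_logits           # weighted sum -> final class scores
```

Taking the argmax of the returned vector gives the predicted class; because the weighting is differentiable, a formulation of this kind allows all K experts to be trained jointly end to end.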
Keyword Australia
Expert systems
Feature extraction
Logic gates
Neural networks
Support vector machines
Training
Q-Index Code E1
Q-Index Status Provisional
Institutional Status UQ

Citation counts Cited 0 times in Scopus
Created: Tue, 21 Jun 2016, 16:08:08 EST by Conrad Sanderson on behalf of School of Information Technology and Electrical Engineering