Unsupervised feature analysis with class margin optimization

Wang, Sen, Nie, Feiping, Chang, Xiaojun, Yao, Lina, Li, Xue and Sheng, Quan Z. (2015). Unsupervised feature analysis with class margin optimization. In: Annalisa Appice, Pedro Pereira Rodrigues, Vítor Santos Costa, Carlos Soares, João Gama and Alípio Jorge, Machine Learning and Knowledge Discovery in Databases, Part I. European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD), Porto, Portugal, (383-398). 7-11 Sep 2015. doi:10.1007/978-3-319-23528-8_24

Author Wang, Sen
Nie, Feiping
Chang, Xiaojun
Yao, Lina
Li, Xue
Sheng, Quan Z.
Title of paper Unsupervised feature analysis with class margin optimization
Conference name European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD)
Conference location Porto, Portugal
Conference dates 7-11 Sep 2015
Proceedings title Machine Learning and Knowledge Discovery in Databases, Part I
Journal name Lecture Notes in Computer Science
Place of Publication Cham, Switzerland
Publisher Springer
Publication Year 2015
Year available 2015
Sub-type Fully published paper
DOI 10.1007/978-3-319-23528-8_24
Open Access Status Not Open Access
ISBN 9783319235271
ISSN 0302-9743
Editor Annalisa Appice
Pedro Pereira Rodrigues
Vítor Santos Costa
Carlos Soares
João Gama
Alípio Jorge
Volume 9284
Start page 383
End page 398
Total pages 16
Chapter number 24
Total chapters 43
Language eng
Formatted Abstract/Summary
Unsupervised feature selection has attracted research attention in the machine learning and data mining communities for decades. In this paper, we propose an unsupervised feature selection method that seeks a feature coefficient matrix to select the most distinctive features. Specifically, the proposed algorithm integrates the Maximum Margin Criterion with a sparsity-based model in a joint framework, taking the class margin and feature correlation into account at the same time. To maximize total data separability while simultaneously minimizing within-class scatter, we embed K-means into the framework to generate pseudo class label information in the unsupervised feature selection setting. Meanwhile, a sparsity-based model, the ℓ2,p-norm, is imposed on the regularization term to effectively discover the sparse structure of the feature coefficient matrix. In this way, noisy and irrelevant features are removed by ruling out those features whose corresponding coefficients are zero. To alleviate the local-optimum problem caused by random initialization of K-means, we propose an algorithm with guaranteed convergence, equipped with an updating strategy for the clustering indicator matrix, to iteratively approach the optimal solution. Performance evaluation is conducted extensively over six benchmark data sets, and the comprehensive experimental results demonstrate that our method outperforms all compared approaches.
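The pipeline the abstract describes — pseudo class labels from embedded K-means, then a maximum-margin criterion to rank features — can be sketched as follows. This is an illustrative approximation, not the authors' method: the per-feature score below is a diagonal between-minus-within scatter proxy for the full MMC objective, it omits the ℓ2,p-norm-regularized coefficient matrix, and a deterministic farthest-point initialization stands in for the paper's convergence-guaranteed updating strategy.

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Lloyd's algorithm with deterministic farthest-point initialization.

    The deterministic init is a simple stand-in for the paper's handling of
    the local-optimum problem caused by random K-means initialization.
    """
    centers = [X[0]]
    for _ in range(k - 1):
        # next center: the point farthest from all centers chosen so far
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def margin_scores(X, labels):
    """Per-feature margin proxy: between-class minus within-class scatter.

    A diagonal approximation of the Maximum Margin Criterion tr(S_b - S_w);
    features with high scores separate the pseudo-classes well.
    """
    mu = X.mean(axis=0)
    s_b = np.zeros(X.shape[1])
    s_w = np.zeros(X.shape[1])
    for c in np.unique(labels):
        Xc = X[labels == c]
        s_b += len(Xc) * (Xc.mean(axis=0) - mu) ** 2
        s_w += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    return s_b - s_w

# Toy data: only feature 0 separates two latent groups; the rest is noise.
rng = np.random.default_rng(1)
X = rng.normal(0.0, 0.1, (100, 5))
X[:50, 0] += 5.0
pseudo = kmeans(X, 2)                                  # pseudo class labels, no supervision
selected = np.argsort(margin_scores(X, pseudo))[::-1]  # most distinctive features first
```

On this toy data feature 0 ranks first. In the paper, selection instead falls out of the sparsity pattern: the ℓ2,p-norm regularizer drives the coefficient-matrix rows of irrelevant features to zero.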
Keyword Unsupervised feature selection
Maximum margin criterion
Sparse structure learning
Embedded K-means clustering
Q-Index Code C1
Q-Index Status Provisional Code
Institutional Status UQ

Citation counts Web of Science: cited 0 times; Scopus: cited 4 times
Created: Sun, 06 Dec 2015, 10:23:13 EST by System User on behalf of the School of Information Technology and Electrical Engineering