Generating explanations from support vector machines for psychological classifications

Song, Insu and Diederich, Joachim (2014). Generating explanations from support vector machines for psychological classifications. In Margaret Lech, Peter Yellowlees, Insu Song and Joachim Diederich (Eds.), Mental health informatics (pp. 125-150). Heidelberg, Germany: Springer. doi:10.1007/978-3-642-38550-6_7


Author Song, Insu
Diederich, Joachim
Title of chapter Generating explanations from support vector machines for psychological classifications
Title of book Mental health informatics
Place of Publication Heidelberg, Germany
Publisher Springer
Publication Year 2014
Sub-type Article (original research)
DOI 10.1007/978-3-642-38550-6_7
Open Access Status
Year available 2014
Series Studies in Computational Intelligence
ISBN 9783642385490
9783642385506
ISSN 1860-949X
1860-9503
Editor Margaret Lech
Peter Yellowlees
Insu Song
Joachim Diederich
Volume number 491
Chapter number 7
Start page 125
End page 150
Total pages 26
Total chapters 13
Collection year 2015
Language eng
Abstract/Summary An explanation capability is crucial in security-sensitive domains, such as medical applications. Although support vector machines (SVMs) have shown superior performance in a range of classification and regression tasks, SVMs, like artificial neural networks (ANNs), lack an explanatory capability. There is a significant literature on obtaining human-comprehensible rules from SVMs and ANNs in order to explain "how" a decision was made or "why" a certain result was achieved. This chapter proposes a novel explanation approach for SVM classifiers. The experiments described in this chapter involve a first attempt to generate textual and visual explanations for classification results using multimedia content of various types: poems expressing positive or negative emotion, autism descriptions, and facial expressions, including those with medical relevance (facial palsy). Learned model parameters are analyzed to select important features, and filtering is applied to select feature subsets of explanatory value. The explanation components are used to generate textual summaries of classification results. We show that the explanations are consistent and that the accuracy of SVM models is bounded by the accuracy of explanation components. The results show that the generated explanations display a high level of fidelity and can generate textual summaries with an error rate of less than 35%.
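The abstract mentions analyzing learned model parameters to select important features for explanation. The chapter's actual procedure is not detailed in this record; the sketch below is only an illustrative approximation of the general idea for the linear case, ranking features by the magnitude of the SVM weight vector. The training routine (a minimal Pegasos-style subgradient method), the toy data, and all feature names are hypothetical.

```python
import random

def train_linear_svm(X, y, lam=0.01, epochs=200):
    """Pegasos-style subgradient training of a linear SVM.

    X: list of feature vectors; y: labels in {-1, +1}.
    Returns the learned weight vector w.
    """
    random.seed(0)  # deterministic for illustration
    d = len(X[0])
    w = [0.0] * d
    t = 0
    for _ in range(epochs):
        for i in random.sample(range(len(X)), len(X)):
            t += 1
            eta = 1.0 / (lam * t)
            margin = y[i] * sum(wj * xj for wj, xj in zip(w, X[i]))
            # Regularization shrinks the weights every step ...
            w = [(1 - eta * lam) * wj for wj in w]
            # ... and margin violations add a hinge-loss correction.
            if margin < 1:
                w = [wj + eta * y[i] * xj for wj, xj in zip(w, X[i])]
    return w

def top_features(w, names, k=2):
    """Rank features by |weight| -- a crude proxy for importance
    that could seed an explanation component."""
    order = sorted(range(len(w)), key=lambda j: abs(w[j]), reverse=True)
    return [(names[j], w[j]) for j in order[:k]]

# Hypothetical toy data: the label depends only on the first feature,
# loosely mimicking word features in positive/negative poems.
X = [[1, 1, 0], [1, 0, 1], [0, 1, 0], [0, 0, 1]]
y = [1, 1, -1, -1]
names = ["happy", "smile", "the"]

w = train_linear_svm(X, y)
print(top_features(w, names))  # "happy" should carry the largest weight
```

For non-linear kernels, weight magnitudes are not directly available, which is one reason the rule-extraction literature cited in the abstract exists; this sketch covers only the simplest linear setting.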
Q-Index Code B1
Q-Index Status Confirmed Code
Institutional Status UQ

Created: Tue, 04 Feb 2014, 00:20:22 EST by System User on behalf of School of Information Technol and Elec Engineering