Generation of silver standard concept annotations from biomedical texts with special relevance to phenotypes

Oellrich, Anika, Collier, Nigel, Smedley, Damian and Groza, Tudor (2015) Generation of silver standard concept annotations from biomedical texts with special relevance to phenotypes. PLoS ONE, 10(1): e0116040. doi:10.1371/journal.pone.0116040


Author Oellrich, Anika
Collier, Nigel
Smedley, Damian
Groza, Tudor
Title Generation of silver standard concept annotations from biomedical texts with special relevance to phenotypes
Journal name PLoS ONE
ISSN 1932-6203
Publication date 2015-01-21
Sub-type Article (original research)
DOI 10.1371/journal.pone.0116040
Open Access Status DOI
Volume 10
Issue 1
Total pages 17
Place of publication San Francisco, CA United States
Publisher Public Library of Science
Collection year 2016
Language eng
Formatted abstract
Electronic health records and scientific articles possess differing linguistic characteristics that may impact the performance of natural language processing tools developed for one or the other. In this paper, we investigate the performance of four extant concept recognition tools: the clinical Text Analysis and Knowledge Extraction System (cTAKES), the National Center for Biomedical Ontology (NCBO) Annotator, the Biomedical Concept Annotation System (BeCAS) and MetaMap. Each of the four concept recognition systems is applied to four different corpora: the i2b2 corpus of clinical documents, a PubMed corpus of Medline abstracts, a clinical trials corpus and the ShARe/CLEF corpus. In addition, we assess the individual system performances with respect to one gold standard annotation set, available for the ShARe/CLEF corpus. Furthermore, we build a silver standard annotation set from the individual systems' output and assess its quality as well as the contribution of the individual systems to that quality. Our results demonstrate that mainly the NCBO Annotator and cTAKES contribute to the silver standard corpora (F1-measures in the range of 21% to 74%) and to their quality (best F1-measure of 33%), independently of the type of text investigated. While BeCAS and MetaMap can contribute to the precision of silver standard annotations (precision of up to 42%), the F1-measure drops when they are combined with the NCBO Annotator and cTAKES, due to low recall. In conclusion, the performance of the individual systems needs to be improved independently of the text types, and the leveraging strategies used to best take advantage of the individual systems' annotations need to be revised. The textual content of the PubMed corpus, accession numbers for the clinical trials corpus, the annotations assigned by the four concept recognition systems, and the generated silver standard annotation sets are available from http://purl.org/phenotype/resources.
The textual content of the ShARe/CLEF (https://sites.google.com/site/shareclefehealth/data) and i2b2 (https://i2b2.org/NLP/DataSets/) corpora must be requested from the individual corpus providers.
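The silver-standard idea summarised in the abstract — merging the outputs of several concept recognition systems and scoring the result against a gold standard with precision, recall and F1 — can be sketched as follows. This is an illustrative toy example only: the annotation spans, HPO concept codes, and the two-vote agreement threshold are hypothetical and do not reflect the paper's actual data or harmonisation rules.

```python
from collections import Counter

def silver_standard(system_outputs, min_votes=2):
    """Keep annotations proposed by at least `min_votes` of the systems.

    Each annotation is a (start_offset, end_offset, concept_id) tuple.
    """
    votes = Counter(ann for out in system_outputs for ann in set(out))
    return {ann for ann, n in votes.items() if n >= min_votes}

def precision_recall_f1(predicted, gold):
    """Exact-match evaluation of a predicted annotation set against gold."""
    tp = len(predicted & gold)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical outputs of three concept recognition systems.
sys_a = {(0, 5, "HP:0001250"), (10, 18, "HP:0002090")}
sys_b = {(0, 5, "HP:0001250"), (22, 30, "HP:0000118")}
sys_c = {(10, 18, "HP:0002090"), (22, 30, "HP:0000118")}

silver = silver_standard([sys_a, sys_b, sys_c])
gold = {(0, 5, "HP:0001250"), (10, 18, "HP:0002090")}
p, r, f1 = precision_recall_f1(silver, gold)
```

In this toy setup each annotation is backed by two of the three systems, so all three survive the vote; against the two-annotation gold set that yields a precision of 2/3, a recall of 1.0, and an F1 of 0.8. Raising `min_votes` trades recall for precision, which mirrors the abstract's observation that adding lower-recall systems can raise precision while depressing the combined F1-measure.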
Q-Index Code C1
Q-Index Status Confirmed Code
Institutional Status UQ

Document type: Journal Article
Sub-type: Article (original research)
Collections: Official 2016 Collection
School of Information Technology and Electrical Engineering Publications
 
Citation counts Web of Science: cited 4 times
Scopus: cited 3 times
Created: Tue, 10 Feb 2015, 00:26:47 EST by System User on behalf of School of Information Technology and Electrical Engineering