Evaluating random error in clinician-administered surveys: theoretical considerations and clinical applications of interobserver reliability and agreement

Bennett, Rebecca J., Taljaard, Dunay S., Olaithe, Michelle, Brennan-Jones, Chris and Eikelboom, Robert H. (2017) Evaluating random error in clinician-administered surveys: theoretical considerations and clinical applications of interobserver reliability and agreement. American Journal of Audiology, 26(3): 191-201. doi:10.1044/2017_AJA-16-0100


Author Bennett, Rebecca J.
Taljaard, Dunay S.
Olaithe, Michelle
Brennan-Jones, Chris
Eikelboom, Robert H.
Title Evaluating random error in clinician-administered surveys: theoretical considerations and clinical applications of interobserver reliability and agreement
Journal name American Journal of Audiology
ISSN 1059-0889
1558-9137
Publication date 2017-09-18
Sub-type Article (original research)
DOI 10.1044/2017_AJA-16-0100
Open Access Status Not yet assessed
Volume 26
Issue 3
Start page 191
End page 201
Total pages 11
Place of publication Rockville, MD, United States
Publisher American Speech-Language-Hearing Association
Language eng
Formatted abstract
Purpose: The purpose of this study is to raise awareness of interobserver concordance and the differences between interobserver reliability and agreement when evaluating the responsiveness of a clinician-administered survey and, specifically, to demonstrate the clinical implications of data type (nominal/categorical, ordinal, interval, or ratio) and statistical index selection (for example, Cohen's kappa, Krippendorff's alpha, or intraclass correlation).
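
To make the reliability-versus-agreement distinction concrete, the following is a minimal sketch (not drawn from the article; the item scores are hypothetical) of why raw percent agreement and Cohen's kappa can tell different stories about the same pair of observers: kappa discounts the agreement expected by chance alone.

    import numpy as np

    def percent_agreement(a, b):
        """Proportion of items on which two observers give identical scores."""
        a, b = np.asarray(a), np.asarray(b)
        return np.mean(a == b)

    def cohens_kappa(a, b):
        """Cohen's kappa for two observers rating the same items (nominal data).

        kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and
        p_e is the agreement expected by chance from each observer's marginals.
        """
        a, b = np.asarray(a), np.asarray(b)
        p_o = np.mean(a == b)
        # Chance agreement: product of the two observers' marginal proportions,
        # summed over all categories either observer used.
        p_e = sum(np.mean(a == c) * np.mean(b == c) for c in np.union1d(a, b))
        return (p_o - p_e) / (1 - p_e)

    # Hypothetical item scores from two observers (0 = fail, 1 = pass).
    obs1 = [1, 1, 1, 0, 1, 1, 1, 1, 0, 1]
    obs2 = [1, 1, 1, 1, 1, 1, 1, 1, 0, 1]
    print(percent_agreement(obs1, obs2))  # 0.9  -- looks excellent
    print(cohens_kappa(obs1, obs2))       # ~0.62 -- only "substantial"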

Methods: In this prospective cohort study, 3 clinical audiologists, who were masked to each other's scores, administered the Practical Hearing Aid Skills Test–Revised to 18 adult owners of hearing aids. Interobserver concordance was examined using a range of reliability and agreement statistical indices.

Results: The importance of selecting appropriate statistical measures of concordance was demonstrated with a worked example, wherein the level of interobserver concordance achieved varied from “no agreement” to “almost perfect agreement” depending on the data type and statistical index selected.
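
The published worked example is not reproduced here, but the same effect is easy to show with hypothetical numbers. The sketch below scores ten subjects with two observers whose totals almost always differ by a single point: treated as nominal data, Cohen's kappa lands near 0.33 (“fair”); treated as interval data, a hand-rolled ICC(2,1) (two-way random effects, absolute agreement, single rater) is roughly 0.95 (“almost perfect”). The icc_2_1 helper and the rater scores are illustrative, not taken from the study.

    import numpy as np

    def icc_2_1(scores):
        """ICC(2,1): two-way random-effects, absolute-agreement, single-rater ICC.

        scores: (n_subjects, n_raters) array of interval-scaled scores.
        """
        x = np.asarray(scores, dtype=float)
        n, k = x.shape
        grand = x.mean()
        msr = k * np.sum((x.mean(axis=1) - grand) ** 2) / (n - 1)  # between subjects
        msc = n * np.sum((x.mean(axis=0) - grand) ** 2) / (k - 1)  # between raters
        sse = np.sum((x - grand) ** 2) - (n - 1) * msr - (k - 1) * msc
        mse = sse / ((n - 1) * (k - 1))                            # residual
        return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

    def cohens_kappa(a, b):
        """Chance-corrected exact agreement for nominal data (as sketched above)."""
        a, b = np.asarray(a), np.asarray(b)
        p_o = np.mean(a == b)
        p_e = sum(np.mean(a == c) * np.mean(b == c) for c in np.union1d(a, b))
        return (p_o - p_e) / (1 - p_e)

    # Hypothetical survey totals (0-10) from two observers who rarely match
    # exactly but never differ by more than one point.
    r1 = np.array([7, 5, 9, 3, 8, 6, 4, 10, 2, 7])
    r2 = np.array([8, 5, 9, 4, 7, 6, 5, 10, 3, 8])

    print(cohens_kappa(r1, r2))                # ~0.33: "fair" if scores are nominal
    print(icc_2_1(np.column_stack([r1, r2])))  # ~0.95: "almost perfect" if interval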

Conclusions: This study demonstrates that the methodology used to evaluate survey score concordance can influence the statistical results obtained and thus affect clinical interpretations.

Document type: Journal Article
Sub-type: Article (original research)
Collection: Faculty of Medicine