Unfolding the phenomenon of interrater agreement: a multicomponent approach for in-depth examination was proposed

Slaug, Björn, Schilling, Oliver, Helle, Tina, Iwarsson, Susanne, Carlsson, Gunilla and Brandt, Ase (2012) Unfolding the phenomenon of interrater agreement: a multicomponent approach for in-depth examination was proposed. Journal of Clinical Epidemiology, 65 9: 1016-1025. doi:10.1016/j.jclinepi.2012.02.016


Author Slaug, Björn
Schilling, Oliver
Helle, Tina
Iwarsson, Susanne
Carlsson, Gunilla
Brandt, Ase
Title Unfolding the phenomenon of interrater agreement: a multicomponent approach for in-depth examination was proposed
Journal name Journal of Clinical Epidemiology
ISSN 0895-4356
1878-5921
Publication date 2012-09
Sub-type Article (original research)
DOI 10.1016/j.jclinepi.2012.02.016
Volume 65
Issue 9
Start page 1016
End page 1025
Total pages 10
Place of publication Philadelphia, PA, United States
Publisher Elsevier
Language eng
Formatted abstract
Objective: The overall objective was to unfold the phenomenon of interrater agreement: to identify potential sources of variation in agreement data and to explore how they can be statistically accounted for. The ultimate aim was to propose recommendations for in-depth examination of agreement to improve the reliability of assessment instruments.

Study Design and Setting: Using a sample in which 10 rater pairs had assessed the presence/absence of 188 environmental barriers with a systematic rating form, a raters × items data set was generated (N = 1,880). In addition to common agreement indices, relative shares of agreement variation were calculated. Multilevel regression analysis was carried out, using rater and item characteristics as predictors of agreement variation.

Results: Following a conceptual decomposition, the agreement variation was statistically disentangled into relative shares. The raters accounted for 6-11%, the items for 32-33%, and the residual for 57-60% of the variation. Multilevel regression analysis showed barrier prevalence and raters' familiarity with using standardized instruments to have the strongest impact on agreement.

Conclusion: Supported by a conceptual analysis, we propose an approach for in-depth examination of agreement variation as a strategy for increasing the level of interrater agreement. By identifying and limiting the most important sources of disagreement, instrument reliability can ultimately be improved.
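The decomposition of agreement variation into rater, item, and residual shares that the abstract describes can be illustrated with a minimal sketch. This is not taken from the paper or this record: the simple two-way sum-of-squares decomposition, the simulated data, and all variable names (agreement, n_pairs, n_items) are assumptions used only to show the kind of calculation meant by "relative shares of agreement variation."

```python
import numpy as np

# Illustrative sketch only: simulated data and a simple two-way decomposition,
# not the authors' exact method.
rng = np.random.default_rng(0)

n_pairs, n_items = 10, 188            # 10 rater pairs x 188 items, as in the abstract
# agreement[p, i] = 1 if rater pair p agreed on item i, else 0 (simulated here)
agreement = rng.integers(0, 2, size=(n_pairs, n_items)).astype(float)

grand = agreement.mean()
pair_means = agreement.mean(axis=1)   # one mean per rater pair
item_means = agreement.mean(axis=0)   # one mean per item

# Decompose total variation into rater-pair, item, and residual parts
ss_total = ((agreement - grand) ** 2).sum()
ss_pairs = n_items * ((pair_means - grand) ** 2).sum()
ss_items = n_pairs * ((item_means - grand) ** 2).sum()
ss_resid = ss_total - ss_pairs - ss_items

for label, ss in [("rater pairs", ss_pairs), ("items", ss_items), ("residual", ss_resid)]:
    print(f"{label}: {100 * ss / ss_total:.1f}% of agreement variation")
```

With real rating data, the same shares could instead be estimated as variance components in a multilevel model with rater and item characteristics as predictors, which is closer to the approach the abstract describes.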
Keyword Agreement
Interrater
Kappa
Methodology
Recommendations
Reliability
Q-Index Code C1
Q-Index Status Provisional Code
Institutional Status Non-UQ

Document type: Journal Article
Sub-type: Article (original research)
Collection: School of Nursing, Midwifery and Social Work Publications
 
Citation counts: Cited 2 times in Scopus
Created: Wed, 02 Jul 2014, 13:56:40 EST by Vicki Percival on behalf of School of Nursing, Midwifery and Social Work