Can Rasch Analysis Enhance the Abstract Ranking Process in Scientific Conferences? Issues of Interrater Variability and Abstract Rating Burden

Scanlan, Justin Newton, Lannin, Natasha A and Hoffmann, Tammy (2015) Can Rasch Analysis Enhance the Abstract Ranking Process in Scientific Conferences? Issues of Interrater Variability and Abstract Rating Burden. Journal of Continuing Education in the Health Professions, 35(1): 18-26. doi:10.1002/chp.21263


Author Scanlan, Justin Newton
Lannin, Natasha A
Hoffmann, Tammy
Title Can Rasch Analysis Enhance the Abstract Ranking Process in Scientific Conferences? Issues of Interrater Variability and Abstract Rating Burden
Journal name Journal of Continuing Education in the Health Professions
ISSN 1554-558X
0894-1912
Publication date 2015-03-20
Year available 2015
Sub-type Article (original research)
DOI 10.1002/chp.21263
Volume 35
Issue 1
Start page 18
End page 26
Total pages 9
Place of publication Hoboken, United States
Publisher John Wiley and Sons Inc
Language eng
Formatted abstract
Introduction

Abstract ranking processes for scientific conferences are essential but controversial. This study examined the validity of a structured abstract rating instrument, evaluated interrater variability, and modeled the impact of interrater variability on abstract ranking decisions. Additionally, we examined whether a more efficient rating process (abstracts rated by two rather than three raters) supported valid abstract rankings.

Methods

Data were 4016 sets of abstract ratings from the 2011 and 2013 national scientific conferences for a health discipline. Many-faceted Rasch analysis procedures were used to examine the validity of the abstract rating instrument and to identify and adjust for the presence of interrater variability. The two-rater simulation was created by deleting one set of ratings for each abstract in the 2013 data set.
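For reference, a minimal sketch of the many-facet Rasch model typically applied to this kind of rating design, assuming the facets are abstracts, rating criteria, and raters (the exact parameterisation used in the study is not stated in the abstract):

\[
\log\!\left(\frac{P_{nijk}}{P_{nij(k-1)}}\right) = B_n - D_i - C_j - F_k
\]

where $B_n$ is the estimated quality of abstract $n$, $D_i$ the difficulty of rating criterion $i$, $C_j$ the severity of rater $j$, and $F_k$ the threshold between adjacent rating categories $k-1$ and $k$. Once rater severity $C_j$ is estimated, adjusted ("fair") abstract measures can be derived from $B_n$, which is how this kind of analysis compensates for interrater variability.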

Results

The abstract rating instrument demonstrated sound measurement properties. Although each rater applied the rating criteria consistently (intrarater reliability), there was significant variability between raters. Adjusting for interrater variability changed the final presentation format for approximately 10–20% of abstracts. The two-rater simulation demonstrated that abstract rankings derived through this process were valid, although the impact of interrater variability was more substantial.

Discussion

Interrater variability exerts a small but important influence on overall abstract acceptance outcomes. Many-faceted Rasch analysis allows this variability to be adjusted for. Additionally, Rasch processes allow for more efficient abstract ranking by reducing the need for multiple raters.
Keyword Interrater reliability
Many-facet Rasch model (MFRM)
Peer review
Profession-other
Psychometrics/instrument design and testing
Scientific meeting
Statistical analysis
Q-Index Code C1
Q-Index Status Confirmed Code
Institutional Status UQ

Document type: Journal Article
Sub-type: Article (original research)
Collections: Official 2016 Collection
School of Health and Rehabilitation Sciences Publications
 
Citation counts: Cited 2 times in Thomson Reuters Web of Science
Cited 3 times in Scopus
Created: Tue, 31 Mar 2015, 11:30:36 EST by System User on behalf of Scholarly Communication and Digitisation Service