The design and application of rubrics to assess signed language interpreting performance

Wang, Jihong, Napier, Jemina, Goswell, Della and Carmichael, Andy (2015) The design and application of rubrics to assess signed language interpreting performance. The Interpreter and Translator Trainer, 9(1): 83-103. doi:10.1080/1750399X.2015.1009261


Author Wang, Jihong
Napier, Jemina
Goswell, Della
Carmichael, Andy
Title The design and application of rubrics to assess signed language interpreting performance
Journal name The Interpreter and Translator Trainer
ISSN 1750-399X
1757-0417
Publication date 2015-02-23
Sub-type Article (original research)
DOI 10.1080/1750399X.2015.1009261
Volume 9
Issue 1
Start page 83
End page 103
Total pages 21
Place of publication Abingdon, Oxfordshire, United Kingdom
Publisher Taylor & Francis
Collection year 2016
Language eng
Formatted abstract
This article explores the development and application of rubrics to assess an experimental corpus of Auslan (Australian Sign Language)/English simultaneous interpreting performances in both language directions. Two rubrics were used, each comprising four main assessment criteria (accuracy, target text features, delivery features and processing skills). Three external assessors – two interpreter educators and one interpreting practitioner – independently rated the interpreting performances. Results reveal marked variability between the raters: inter-rater reliability between the two interpreter educators was higher than between each interpreter educator and the interpreting practitioner. Results also show that inter-rater reliability regarding Auslan-to-English simultaneous interpreting performance was higher than for English-to-Auslan simultaneous interpreting performance. This finding suggests greater challenges in evaluating interpreting performance from a spoken language into a signed language than vice versa. The raters’ testing and assessment experience, their scoring techniques and the rating process itself may account for the differences in their scores. Further, results suggest that assessment of interpreting performance inevitably involves some degree of uncertainty and subjective judgment.
Keyword Signed language interpreting
Assessment rubrics
Raters
Scoring process and techniques
Inter-rater reliability
Q-Index Code C1
Q-Index Status Provisional Code
Institutional Status Non-UQ

Document type: Journal Article
Sub-type: Article (original research)
Collections: Non HERDC
School of Languages and Cultures Publications
Citation counts: Cited 2 times in Thomson Reuters Web of Science
Cited 2 times in Scopus
Created: Mon, 13 Jul 2015, 15:14:31 EST by Ms Katrina Hume on behalf of School of Languages and Cultures