Purpose of the study/project. To evaluate the reliability of a universal clinical performance grading rubric for undergraduate nursing students.
Literature Review: Clinical instructors expect to see student performance improve; however, attaching a grade to performance remains subjective (Amicucci, 2012; Isaacson & Stacey, 2009; Oerman, Yarbrough, Saewert, Ard, & Charasika, 2009). Standardized, documented evaluation methods can improve objectivity, define expected competencies, be easier to defend, help avoid litigation, and empower the instructor by shifting the paradigm from highlighting student errors to an educational perspective (Bofinger & Rizk, 2006; DeBrew & Lewallen, 2014; Tanicala, Scheffer, & Roberts, 2011). Evaluating performance with a program-wide grading rubric that measures clinically specific, criterion-based learning outcomes appears to be a novel approach in clinical nursing education (Bourbonais, Langford, & Giannantonio, 2008; Gantt, 2010; Heaslip & Scammel, 2012; Lasater, 2007).
Sample Description/Population. A convenience sample of 58 first-semester clinical undergraduate baccalaureate nursing students.
Setting. Seven clinical instructors in nine clinical sections.
Method/Design & Procedure. A universal grading rubric with nine performance outcomes was tested retrospectively for reliability and consistency. Summative and formative measures were compared, and scores on written assignments were compared with clinical performance scores. Clinical instructors responded to questions about the accuracy of the letter grade calculated with the rubric.
Results/Outcomes. A significant difference was found between midterm (M = .89) and final (M = .94) performance evaluations, t(57) = -15.896, p < .001 (two-tailed), indicating an increase in final performance. No correlation was found between final written work and performance evaluations, r(56) = .164, p > .05, and a significant difference was noted between written work (M = .973) and performance evaluations (M = .915), t(114) = 14.536, p < .001. Cronbach's alpha across all nine performance outcomes was .917, demonstrating excellent internal consistency. All clinical instructors agreed that the results accurately measured student performance.
Conclusions/Implications. The grading rubric was effective in measuring student clinical performance and provided an objective grade calculation. Students' written work consistently scored higher than their clinical performance. When used in an undergraduate clinical experience, this grading rubric has the potential to increase the reliability of grading clinical performance.