An Investigation of the Integration of Technology to Enhance Consistency in Grading Clinical Skills

Sunday, 30 July 2017

Sabine S. S. Dunbar, RN
College of Graduate Nursing, Western University of Health Sciences, Pomona, CA, USA

Background: A vital aspect of health professional education is the evaluation of clinical competence (Snodgrass, Ashby, Onyango, Russell, & Rivet, 2014). Moreover, nurse educators ensure that students become safe and competent practitioners (Hodson Carlton, 2012). The development of competence begins during pre-licensure nursing education (Vernon, Chiarella, & Papps, 2011); hence, students utilize learning opportunities to become proficient in clinical skills before practicing them on real patients. In light of the current focus on quality and safety in health care, there is a need to accurately evaluate performance to ensure safe clinical practice (Zasadny & Bull, 2015). In nursing education, clinical skills laboratories serve the purpose of evaluating student performance of clinical skills (Houghton, Casey, Shaw, & Murphy, 2012), which are an important component of nursing competence. Clinical skills commonly taught in pre-licensure nursing programs include physical assessment skills, with summative evaluation taking place through student demonstration of a physical examination on a simulated patient. Nurse educators evaluate student performance via direct observation (Bourke & Ihrke, 2012), which may introduce inconsistency in grading (Donaldson & Gray, 2012; Zasadny & Bull, 2015). Students in a baccalaureate, pre-licensure health assessment course have reported such inconsistency when educators grade summative examinations. Health professional education programs utilize audio-visual technology for student learning and evaluation, and audio-visual recording may potentially enhance consistency among faculty evaluating clinical skills.

Purpose: This project investigated grading consistency among nursing faculty who evaluate summative physical examinations in a health assessment course, using audio-visual technology to compare live-review and video-review methods of grading.

Methods: A descriptive, comparative design was used to compare live grading with grading based on a video recording, and to measure reliability among six nurse educators teaching a health assessment course. Educators graded a physical examination performed by student and patient actors in a simulation laboratory, using the pre-established checklist from the nursing course for grading student-performed physical examinations. The examination was simultaneously recorded, allowing measurement of inter-rater and intra-rater reliability and a comparison of live grading with video grading of the same examination by the same faculty approximately one month later.
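The abstract does not specify which reliability statistic will be used. As an illustration only, a common chance-corrected measure of inter-rater agreement on a dichotomous checklist (item performed correctly or not) is Cohen's kappa; the sketch below, with entirely hypothetical checklist scores, shows how it could be computed for two educators grading the same recorded examination.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters
    who each assign a categorical score to the same set of items."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: proportion of items both raters scored identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's marginal score frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical 10-item checklist (1 = performed correctly, 0 = not)
# scored by two educators for the same examination.
rater_a = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]
rater_b = [1, 1, 0, 1, 0, 0, 1, 1, 1, 1]
print(round(cohens_kappa(rater_a, rater_b), 3))  # → 0.524
```

Values near 1 indicate strong agreement beyond chance; intra-rater reliability could be assessed the same way by comparing one educator's live scores with that educator's video-based scores a month later.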

Results: The study is ongoing until January 2017, at which time data will be analyzed and results will be available.

Conclusion: Conclusions of this project will depend on the results. However, it is anticipated that conclusions may be drawn regarding consistency among faculty grading physical examinations, potentially prompting consideration of methods to improve consistency across multiple faculty members. Additionally, the results may provide evidence for integrating audio-visual technology into the grading of clinical skills in nursing education.