Paper
Wednesday, July 11, 2007
This presentation is part of : Creating tools and building evidence to evaluate the Outcome-Present State-Test (OPT) Model of clinical reasoning
Quantitatively evaluating students' clinical reasoning with the Outcome-Present State-Test Model
Donald D. Kautz, PhD, RN, CNRN, CRRN-A1, RuthAnne Kuiper, RN, PhD2, Robin Bartlett, PhD, RN, BC3, Daniel J. Pesut, PhD, APRN, BC, FAAN4, and Raymond Buck, PhD3. (1) Adult Health, School of Nursing, University of North Carolina Greensboro, Greensboro, NC, USA, (2) School of Nursing, University of North Carolina Wilmington, Wilmington, NC, USA, (3) School of Nursing, University of North Carolina at Greensboro, Greensboro, NC, USA, (4) Graduate Programs, Indiana University School of Nursing, Indianapolis, IN, USA

Assisting students to develop clinical competence requires that faculty be able to evaluate students’ ability to reason through clinical problems.  This session reports the development of a tool to quantify students’ ability to complete Clinical Reasoning Webs and Outcome-Present State-Test (OPT) Model worksheets after caring for patients in a variety of clinical settings.  The tool, now in its fourth version, continues to evolve through use in research and educational practice. It contains 22 areas in which faculty rate students’ work, with 76 possible total points. The tool was designed to detect variability from week to week and to differentiate between good and poor work.

The current version of the tool was used to conduct a secondary analysis of 510 Clinical Reasoning Webs and OPT Model worksheets completed by 46 students in an adult health (medical-surgical) clinical practicum.  Students completed the OPT Model worksheets after clinical each week for 8-10 weeks.  The tool was expanded from earlier versions in response to the lack of variation observed between students and over time, and to low inter-rater reliability scores.

The primary benefit of using a rating tool for clinical assignments is that faculty make their expectations explicit and show students how their work will be evaluated. Consistent use of the tool is promoted when faculty commit to training and group review, which reduce threats to internal validity from faculty fatigue and from bias toward or against particular students.  Ever since faculty began requiring students to complete nursing care plans for clinical practicums, nursing students have justifiably complained that faculty evaluation of their work was biased and varied from week to week. Reliable and valid measures of students’ written clinical work would enable faculty to track progress and identify strengths and weaknesses for professional growth and remediation.