Paper
Wednesday, July 11, 2007
This presentation is part of: Creating tools and building evidence to evaluate the Outcome-Present State-Test (OPT) Model of clinical reasoning
Statistical Comparisons of Clinical Reasoning using the Outcome-Present State-Test Model
Raymond Buck, PhD, School of Nursing, University of North Carolina at Greensboro, Greensboro, NC, USA

Assisting students to develop clinical competence requires that faculty be able to evaluate students’ ability to reason through clinical problems. This session reports the statistical evaluation of Outcome-Present State-Test (OPT) Model worksheets (version 3) completed by students in an adult health clinical practicum. This version of the tool contained 23 areas in which faculty rated students’ work, with 74 possible total points: 14 areas were counts, up to pre-defined maxima, of specific attributes from the corresponding Clinical Reasoning Webs and nursing care plans, and the remaining 9 areas were simple binary (1/0) responses for the presence of other desired features.
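For concreteness, the sketch below shows how such a worksheet total might be computed, assuming the 14 count items are capped at their pre-defined maxima and each of the 9 binary items contributes one point (so the count maxima must sum to 65 to reach the 74-point ceiling); the item names, maxima, and values shown are hypothetical, not the actual rating form.

```python
# A hypothetical sketch of scoring a version-3 OPT worksheet; the
# maxima and example values are illustrative, not the actual instrument.

def opt_total(counts, count_maxima, binary_flags):
    """Total score: count items are capped at pre-defined maxima;
    binary items contribute 0 or 1 point each."""
    assert len(counts) == len(count_maxima) == 14 and len(binary_flags) == 9
    capped = sum(min(c, m) for c, m in zip(counts, count_maxima))
    return capped + sum(bool(f) for f in binary_flags)

# Hypothetical maxima summing to 65, plus 9 binary items = 74-point ceiling.
maxima = [8, 6, 6, 5, 5, 5, 4, 4, 4, 4, 4, 4, 3, 3]
counts = [9, 5, 6, 3, 5, 2, 4, 4, 1, 4, 3, 2, 3, 1]  # first item exceeds its cap
flags = [1, 1, 0, 1, 1, 1, 0, 1, 1]
print(opt_total(counts, maxima, flags))               # 58
```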

Statistical techniques such as growth curve modeling and time-series analysis are useful for examining learning over time, and could potentially be used to assess the desired attributes of detecting variability from week to week and differentiating between good and poor work. However, not all components of the OPT total score change uniformly; many reach their maximum possible contribution on the first or second worksheet evaluated. Further, the mixture of limited-category ordinal variables from different measurement scales presents challenges for these continuous-data techniques, and alternative strategies are presented.
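As an illustration of the growth curve approach, the sketch below fits a linear mixed-effects model to simulated weekly totals using the statsmodels library; the data, column names, and effect sizes are hypothetical stand-ins, and the model treats the total as continuous, which is precisely the assumption the ordinal, ceiling-prone components call into question.

```python
# A minimal growth curve sketch on simulated weekly OPT totals;
# all data and parameters here are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
students = np.repeat(np.arange(12), 8)            # 12 students, 8 weekly worksheets
weeks = np.tile(np.arange(1, 9), 12)
totals = (45 + 2.0 * weeks                        # average gain of 2 points/week
          + rng.normal(0, 4, 12)[students]        # student-level intercepts
          + rng.normal(0, 3, weeks.size))         # week-to-week noise
df = pd.DataFrame({"student": students, "week": weeks,
                   "total": np.clip(totals, 0, 74)})  # respect the 74-point ceiling

# Random intercept per student; the fixed 'week' coefficient
# estimates average growth per week across students.
model = smf.mixedlm("total ~ week", df, groups=df["student"])
print(model.fit().summary())
```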

An exploration of the individual component contributions via factor analysis, to assist in refining the OPT model in this clinical setting, is also presented. Again, variable limitations make application of this and many other common multivariable methods problematic. These issues, coupled with the lack of a common standard for each week’s assessment, make comparisons between students using the tool a challenge. However, descriptive and less complicated analyses have proven useful.
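For reference, the sketch below runs an ordinary exploratory factor analysis on simulated stand-ins for the 23 component scores using scikit-learn; the sample size, score distributions, and two-factor structure are hypothetical, and ordinary factor analysis assumes continuous indicators, which is exactly the limitation noted above for the binary and capped-count items.

```python
# A hypothetical factor analysis sketch on simulated stand-ins for the
# 23 OPT component scores; ordinary FA assumes continuous indicators.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
latent = rng.normal(size=(120, 2))                # 120 worksheets, 2 latent traits
loadings = rng.normal(size=(2, 23))               # 23 component scores
X = latent @ loadings + rng.normal(0, 0.5, (120, 23))

fa = FactorAnalysis(n_components=2, rotation="varimax").fit(X)
print(fa.components_.round(2))                    # estimated factor loadings
```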