Paper
Friday, July 13, 2007
This presentation is part of: Instrument Development and Measurement Models
Assessing Diagnostic Reasoning with Standardized Patients
Christine Pintz, PhD, RNC, APRN, BC, Department of Nursing Education, George Washington University, Washington, DC, USA
Learning Objective #1: To describe the development and psychometric assessment of the Diagnostic Reasoning Assessment (DRA).
Learning Objective #2: To discuss the evaluation of nurse practitioner students’ expertise with the diagnostic reasoning process within the standardized patient environment.

Aim: The purpose of this study was to establish support for the reliability and validity of the Diagnostic Reasoning Assessment (DRA), an assessment instrument used within the standardized patient environment. The DRA assesses diagnostic reasoning skill in nurse practitioner (NP) students. 

Background: Very few tools exist to evaluate students in the standardized patient environment. To ensure clinical competence in nurse practitioner students, methods must be developed to assess clinical performance. The DRA is a formative evaluation that provides students with feedback on performance. It helps faculty gain insight into a student's diagnostic reasoning ability and fosters improvement in its development.

Methods: NP students were evaluated by two NP faculty members and a standardized patient using the DRA. Content validity was assessed by content experts, and the instrument was revised based on their feedback. Generalizability studies were performed for the DRA, the SOAP Note Evaluation Tool (SOAP), and the Script Concordance Test (SCT), an instrument that measures diagnostic reasoning, to determine variance components. Construct validity was examined by comparing correlations between the DRA and the SOAP and the SCT.

Findings: Generalizability analysis followed a two-facet, person-by-rater-by-item design. The generalizability coefficient was 0.81 for all raters. The largest source of variance for the DRA was the person component, accounting for 44% of the variance, followed by the residual at 24% and the person-by-rater interaction at 23%. The relationship between the DRA and the SCT was significant for faculty ratings, r(49) = .32, p = .02. The relationship between the DRA and the SOAP was significant with all raters, r(47) = .39, p = .006, and with faculty raters, r(47) = .44, p = .002.
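For readers unfamiliar with generalizability analysis, the coefficient reported above can be illustrated with a minimal sketch. This assumes the standard relative G coefficient formula for a fully crossed two-facet (person × rater × item) design; the variance components and facet sizes used here are hypothetical, not the study's actual data.

```python
# Sketch: relative generalizability (G) coefficient for a fully crossed
# person x rater x item design. Variance components are illustrative only.

def g_coefficient(var_p, var_pr, var_pi, var_res, n_raters, n_items):
    """Universe-score (person) variance divided by itself plus the
    relative error variance, with interaction/residual components
    averaged over the numbers of raters and items."""
    error = (var_pr / n_raters
             + var_pi / n_items
             + var_res / (n_raters * n_items))
    return var_p / (var_p + error)

# Hypothetical components echoing the study's pattern (person variance
# dominant, sizable residual and person-by-rater terms); the resulting
# coefficient is not the study's reported 0.81.
g = g_coefficient(var_p=0.44, var_pr=0.23, var_pi=0.05, var_res=0.24,
                  n_raters=3, n_items=10)
print(round(g, 2))  # -> 0.83
```

As the formula shows, adding raters or items shrinks the averaged error terms and raises the coefficient, which is why G studies are used to plan how many raters an assessment like the DRA needs.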

Conclusion: This study offers support for the psychometric properties of the Diagnostic Reasoning Assessment.