Development and Alpha Testing of the Self-Management TO Prevent (STOP) Stroke Tool

Tuesday, 14 July 2009: 1:45 PM

Jane A. Anderson, PhD
Neurology Care Line, Michael E. DeBakey VA Medical Center, Houston, TX
Pamela Willson, RN, PhD, FNP, BC
Elsevier Review and Testing, Elsevier Publishing, Houston, TX

Learning Objective 1: Apply knowledge management theory to guide the development and utilization of clinical decision support tools for the translation of evidence-based practice.

Learning Objective 2: Describe testing methodologies that incorporate end-users’ perspectives to develop usable and useful clinical decision support systems.

The Self-management TO Prevent (STOP) Stroke Tool is a clinical decision support system (CDSS) that guides nurse practitioners and other clinicians in evidence-based secondary stroke prevention. The tool prompts clinicians with secondary stroke prevention clinical practice guideline (CPG) recommendations and links to printable patient education materials and self-management action plans.
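As a purely illustrative sketch (not the STOP Stroke Tool's actual implementation), the pairing of each CPG recommendation with its on-screen prompt and printable patient materials could be modeled roughly as follows; every name, value, and file path here is a hypothetical placeholder.

    from dataclasses import dataclass

    @dataclass
    class CpgPrompt:
        """One secondary stroke prevention guideline item surfaced to the clinician."""
        guideline: str           # e.g., "Blood pressure control"
        prompt_text: str         # reminder shown in the documentation screen
        education_handout: str   # printable patient education sheet (placeholder path)
        action_plan: str         # self-management action plan (placeholder path)

    blood_pressure = CpgPrompt(
        guideline="Blood pressure control",
        prompt_text="Document current BP and antihypertensive plan.",
        education_handout="handouts/bp_control.pdf",
        action_plan="plans/bp_self_management.pdf",
    )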

Purpose: The purpose of this study was to complete alpha testing of the STOP Stroke Tool through multiple iterative testing cycles with end-user evaluation. Testing was designed to validate the functionality of each component of the tool and to determine its overall usability among a sample of multidisciplinary clinicians.

Methods: A before/after design with descriptive methods was used. The primary functionality tested was automated prompting and documentation of secondary stroke prevention CPGs in the electronic medical record. To test this functionality, documentation of CPGs was compared among a sample of multidisciplinary providers (N = 15) completing test case scenarios in two documentation systems: the standard system vs. the STOP Stroke Tool. Usability was evaluated with an investigator-developed questionnaire and one open-ended question. Nonparametric and descriptive statistics were used to analyze the data.
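The abstract does not name the specific nonparametric test; as one hedged illustration, the paired before/after counts of CPG items each provider documented could be compared with a Wilcoxon signed-rank test. The values below are invented solely to show the analysis pattern, not study data.

    from scipy.stats import wilcoxon

    # Hypothetical counts of CPG items (out of 11) documented by each of 15 providers
    standard  = [4, 5, 3, 6, 4, 5, 2, 6, 5, 4, 3, 5, 4, 6, 5]     # standard documentation system
    stop_tool = [8, 9, 7, 10, 8, 9, 6, 10, 9, 8, 7, 9, 8, 10, 9]  # STOP Stroke Tool

    # Wilcoxon signed-rank test for paired, non-normally distributed data
    statistic, p_value = wilcoxon(standard, stop_tool)
    print(f"W = {statistic:.1f}, p = {p_value:.4f}")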

Results: The STOP Stroke Tool prompted a significant increase (p ≤ .05) in providers’ documentation for 6 of the 11 CPGs (55%) compared with baseline documentation in the standard system. Usability was rated high, with a mean score of 48.9 (SD = 6.8) out of a possible 56 points. No significant differences in total usability scores were found among provider types, indicating consensus on the tool’s high usability across disciplines.

Conclusion: This research supports the effectiveness of embedding a CDSS within the electronic medical record to give clinicians immediate access to evidence-based recommendations. Clinicians are more likely to use and apply evidence-based recommendations when automated decision support is incorporated into their established workflow.