Feasibility of Simulation in Orientation: A Pilot Study

Monday, 9 November 2015

Karen M. O'Connell, PhD, MSN, BSN, ADN, RN, CEN, NEA-BC
Wright-Patterson Medical Center, WPAFB, OH, USA
Sherrill J. Smith, PhD, RN, CNL, CNE
College of Nursing and Health, Wright State University, Dayton, OH, USA

Background: Active duty military nurses transfer to new duty stations every three or four years. With each transfer, the nurse enters a typical six-week orientation period, during which newly assigned nurses receive organizational and unit information while preceptors evaluate their patient care skills. Most units use an orientation checklist, and bedside skill proficiency is validated by direct observation. Decreased patient acuity and shorter lengths of stay have limited the availability of common patient types and procedures, especially in critical care units. Simulation allows observation of nursing skills in less time than direct observation at the bedside; however, the literature does not identify a best evaluation tool. The purpose of this study was to determine the feasibility of high-fidelity simulation in the unit orientation of newly assigned nursing personnel, specifically nurses assigned to the critical care unit. The research question was: What is the feasibility of using a simulated clinical shift as part of critical care unit orientation to assess the performance of incoming nursing personnel? The specific aims were to:

1) Determine appropriate simulation scenarios to address high-risk, low-volume critical care situations and the most common patient types to be included in the performance assessment of newly assigned nurses;

2) Develop a performance evaluation tool, based on the current critical care orientation checklist, for evaluating the new nurse’s performance in the simulation scenarios;

3) Determine the psychometric properties of the performance evaluation tool when used with the simulation scenarios; and

4) Determine the usefulness of the simulation scenarios and performance evaluation tool for identifying nurses who do not meet performance standards.

Jeffries’ (2005, 2007) Nursing Education Simulation Framework was developed to guide nurses in the design, implementation, and evaluation of simulation in nursing education. The framework includes five major components: teacher factors, student factors, educational practices, simulation design characteristics, and outcomes. According to this framework, a properly designed simulation is a teaching strategy in the teaching-learning process that affects five outcomes: satisfaction, self-confidence, critical thinking, knowledge, and performance (Jeffries, 2005). Extensive research has validated simulation as an educational strategy for nursing students (Cant & Cooper, 2010) and newly graduated nurses (Ackermann, Kenny, & Walker, 2007; Olejniczak et al., 2010). Increases in satisfaction, self-confidence, critical thinking, and knowledge have been reported in nursing students following a simulation experience (Birch et al., 2007; Bremner, Aduddell, Bennett, & VanGeest, 2006; Gibbons et al., 2002; O’Donnell et al., 2011; Swenty & Eggleston, 2011). Reports of changes in performance following simulation are less abundant for nursing students (Alinier, Hunt, & Gordon, 2004; Grady et al., 2008; Pauly-O’Neill, 2009), and fewer still address performance in practicing nurses (Jones, Cason, & Mancini, 2002).
When the Nursing Education Simulation Framework is applied to evaluation, simulation becomes an evaluation tool rather than a teaching strategy: teacher and student factors interact with educational practices to produce the outcome of performance. For this study, a summative evaluation was performed, assessing the learner’s attainment of a goal (acceptable performance of patient care in a simulated critical care environment) to determine the nurse’s readiness to provide care in a critical care setting.
Methods: A descriptive pilot study was conducted to determine the feasibility of using simulation scenarios to evaluate performance. Scenarios addressing the most common types of patients and procedures seen in the critical care unit, along with an evaluation tool, were created and tested with experienced critical care nurses and nurses with no critical care experience. Participants (n=7) were recruited from all registered nurses in a military treatment facility. Three independent raters timed and evaluated the participants as they completed three simulated patient care scenarios. Inter-rater reliability was estimated by calculating Cronbach’s alpha across the raters’ scores. Differences between groups in mean overall score and in time to complete the scenarios were examined with independent-samples t-tests.
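For illustration, the following minimal Python sketch shows how these two analyses can be computed. The scores, group sizes, and group assignments below are hypothetical placeholders, not the study’s data; Cronbach’s alpha is computed by treating each rater as an “item,” and the group comparison uses an independent-samples t-test as described above.

    import numpy as np
    from scipy import stats

    def cronbach_alpha(ratings):
        # ratings: 2-D array, rows = participants, columns = raters ("items")
        ratings = np.asarray(ratings, dtype=float)
        k = ratings.shape[1]                           # number of raters
        item_vars = ratings.var(axis=0, ddof=1)        # per-rater variance
        total_var = ratings.sum(axis=1).var(ddof=1)    # variance of summed scores
        return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

    # Hypothetical scores: 7 participants x 3 raters (tool range 11 to 55).
    scores = np.array([
        [34, 33, 35],
        [36, 35, 37],
        [48, 50, 49],
        [52, 53, 54],
        [45, 44, 46],
        [54, 55, 55],
        [40, 41, 39],
    ])
    print("Cronbach's alpha: %.2f" % cronbach_alpha(scores))

    # Group comparison on mean overall score; group membership is assumed
    # here for illustration (4 experienced vs. 3 without critical care experience).
    mean_scores = scores.mean(axis=1)
    experienced, novice = mean_scores[:4], mean_scores[4:]
    t, p = stats.ttest_ind(experienced, novice)        # pooled df = 7 - 2 = 5
    print("t(%d) = %.2f, p = %.2f" % (len(mean_scores) - 2, t, p))

Note that with seven participants split into two groups, a pooled-variance t-test has five degrees of freedom, which matches the t(5) statistics reported in the Results.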
Results: Inter-rater reliability for the evaluation tool was excellent (Cronbach’s alpha = 0.95). Mean overall participant scores, grouped by self-report of critical care experience, ranged from 33.67 to 54.67 (possible score range: 11.00 to 55.00). A split in the overall mean scores was identified between those with and those without critical care experience. Mean time to complete the scenarios was 55.29 minutes (range: 38 to 82 minutes). There were no statistically significant differences between the groups in overall mean score (t(5) = -0.51, p = 0.63) or in time to complete the scenarios (t(5) = 1.55, p = 0.18).
Conclusion: Simulation scenarios to evaluate new nurses are feasible in the military treatment facility. Although not time-consuming for the participant, preparation and evaluation of the scenarios are personnel- and time-intensive. Although no statistically significant differences were found, the split in overall mean scores may indicate a method for determining proficiency in critical care nursing. Findings from this study support the use of the created performance evaluation tool with simulation scenarios. Replication of this study with a larger, more diverse sample is recommended to further validate the evaluation tool and these findings. Successful results can be transferred to other departments within the medical center and to performance validation prior to deployment.