Jeffries’ (2005, 2007) Nursing Education Simulation Framework was developed to provide guidance to nurses in the design, implementation, and evaluation of simulation used in nursing education. This framework includes five major components: teacher factors, student factors, educational practices, simulation design characteristics, and outcomes. According to this framework, a properly designed simulation is a teaching strategy used in the teaching-learning process and affects the five described outcomes of satisfaction, self-confidence, critical thinking, knowledge, and performance (Jeffries, 2005). Extensive research has validated the use of simulation as an educational strategy for nursing students (Cant & Cooper, 2010) and newly graduated nurses (Ackermann, Kenny, & Walker, 2007; Olejniczak et al., 2010). Increases in satisfaction, self-confidence, critical thinking, and knowledge have been reported in nursing students following a simulation experience (Birch et al., 2007; Bremner, Aduddell, Bennett, & VanGeest, 2006; Gibbons et al., 2002; O’Donnell et al., 2011; Swenty & Eggleston, 2011). Published reports of performance changes following simulation are less abundant for nursing students (Alinier, Hunt, & Gordon, 2004; Grady et al., 2008; Pauly-O’Neill, 2009), with even fewer addressing performance in practicing nurses (Jones, Cason, & Mancini, 2002).
When the Nursing Education Simulation Framework is applied to evaluation, simulation becomes an evaluation tool instead of a teaching strategy. Teacher and student factors interact with educational practices to produce the outcome of performance. For this study, a summative evaluation was performed: the learner’s attainment of a goal (acceptable performance of patient care in a simulated critical care environment) was evaluated to determine the nurse’s readiness to provide care in a critical care environment.
Methods: A descriptive pilot study was conducted to determine the feasibility of using simulation scenarios to evaluate performance. Scenarios addressing the most common types of patients and procedures seen in the critical care unit, along with an evaluation tool, were created and tested with experienced critical care nurses and nurses with no critical care experience. Participants (n=7) were recruited from all registered nurses in a military treatment facility. Three independent raters timed and evaluated the participants as they completed three simulated patient care scenarios. Inter-rater reliability was determined by calculating Cronbach's alpha across the raters. Differences in mean overall score and in time to complete the scenarios were tested with independent-samples t-tests.
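The inter-rater reliability calculation described above (Cronbach's alpha, treating each rater as an "item") can be sketched as follows. The scores below are hypothetical values chosen for illustration only, not data from this study.

```python
import statistics

def cronbach_alpha(scores):
    """Cronbach's alpha for inter-rater reliability, treating each
    rater as an 'item'.

    scores: list of per-participant lists, one score per rater.
    """
    k = len(scores[0])  # number of raters
    # Sample variance of each rater's column of scores
    rater_vars = [statistics.variance([row[i] for row in scores])
                  for i in range(k)]
    # Sample variance of the per-participant total scores
    total_var = statistics.variance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(rater_vars) / total_var)

# Hypothetical scores: 5 participants, each rated by 3 raters
# (illustrative only; NOT the study's data)
scores = [
    [50, 51, 49],
    [34, 33, 35],
    [45, 44, 46],
    [38, 39, 37],
    [52, 53, 51],
]
alpha = cronbach_alpha(scores)  # close agreement -> alpha near 1
```

With raters in close agreement, as in this sketch, alpha approaches 1; the 0.95 reported in the Results would indicate similarly strong consistency among the three raters.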
Results: Inter-rater reliability for the evaluation tool was excellent (Cronbach’s alpha = 0.95). Mean overall participant scores, grouped by self-report of critical care experience, ranged from 33.67 to 54.67 (possible score range: 11.00 to 55.00). A split in the overall mean scores was identified between those with and those without critical care experience. Mean time to complete the scenarios was 55.29 minutes (range: 38 to 82 minutes). There were no statistically significant differences between the groups in overall mean score (t(5)=-0.51, p=0.63) or in time to complete the scenarios (t(5)=1.55, p=0.18).
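The group comparison reported above is a pooled (independent-samples) t-test, consistent with the reported df = n1 + n2 - 2 = 5 for n = 7. A minimal sketch of the statistic, using hypothetical group scores rather than the study's data:

```python
import math
import statistics

def pooled_t(group_a, group_b):
    """Independent-samples t statistic with pooled variance.

    Returns (t, df) where df = n_a + n_b - 2.
    """
    n_a, n_b = len(group_a), len(group_b)
    mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    df = n_a + n_b - 2
    # Pooled variance: weighted average of the two sample variances
    pooled_var = ((n_a - 1) * var_a + (n_b - 1) * var_b) / df
    t = (mean_a - mean_b) / math.sqrt(pooled_var * (1 / n_a + 1 / n_b))
    return t, df

# Hypothetical overall scores (NOT the study's data):
# a 4/3 split of the n = 7 sample gives df = 5, as reported
experienced = [50, 52, 48, 54]
no_experience = [45, 40, 47]
t, df = pooled_t(experienced, no_experience)
```

Converting t to a p-value requires the t-distribution CDF, which the standard library does not provide; in practice one would use, e.g., `scipy.stats.ttest_ind`, which computes both values directly.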
Conclusion: Simulation scenarios to evaluate new nurses are feasible in the military treatment facility. Although the scenarios are not time consuming for participants, their preparation and evaluation are personnel- and time-intensive. Although no statistically significant differences were found, the split in overall mean scores may indicate a method for determining proficiency in critical care nursing. Findings from this study support the use of the created performance evaluation tool with simulation scenarios. Replication of this study with a larger, more diverse sample is recommended to further validate the evaluation tool and these findings. Successful results could be transferred to other departments within the medical center and to performance validation prior to deployment.