Who Should Do DASH Evaluations for Simulation Facilitators?

Sunday, 29 October 2017: 3:25 PM

I. Marlene Summers, MSN/ED
Intermountain Simulation, Intermountain Healthcare Simulation Team, Salt Lake City, UT, USA

Intermountain Healthcare is one of the largest health care systems in Utah and southern Idaho. It began developing and growing its simulation program in 2011, and by 2015 it was ready to pursue national accreditation from the Society for Simulation in Healthcare to establish stability and credibility for the program. One requirement for that accreditation is a periodic evaluation of each simulation facilitator within the organization. The purpose of these evaluations is to ensure consistency across simulation debriefings, maximize learning, and protect psychological safety for all participants.

Corporate simulation leaders who had completed the Comprehensive Instructor Workshop in Medical Simulation at the Center for Medical Simulation in Cambridge, Massachusetts, had been introduced to the Debriefing Assessment for Simulation in Healthcare (DASH) evaluation tool. In the fall of 2015, regional simulation leaders attended a webinar hosted by the Center for Medical Simulation to learn about the DASH evaluation and to participate in the final decision about adopting the tool. Because the DASH evaluation tool has been validated, the decision to use it in 2016 was relatively easy.

This raised a new question: who should complete the evaluation, the facilitator (a self-evaluation), a fellow facilitator (a peer evaluation), or the regional simulation leader? Finding no recent published information to answer this question, the North Region of Intermountain Healthcare compared the average scores given on the 23-item long form of the DASH evaluation. The goal was to determine the differences and similarities among self-evaluations, peer evaluations, and regional leader evaluations, and to use that comparison to recommend the best way to complete these periodic evaluations throughout the corporation's simulation program.

The DASH evaluation rates six Elements, each comprising two to five behaviors, on a 7-point scale:

1 = Extremely Ineffective/Detrimental
2 = Consistently Ineffective/Very Poor
3 = Mostly Ineffective/Poor
4 = Somewhat Effective/Average
5 = Mostly Effective/Good
6 = Consistently Effective/Very Good
7 = Extremely Effective/Outstanding

Averages across 57 facilitator evaluations from the North Region revealed considerable differences among the three categories of evaluators. These differences raised further questions about how to proceed with future evaluations.
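As an illustrative aside, the comparison described above is straightforward averaging. The minimal Python sketch below groups DASH behavior ratings by evaluator category and reports each category's mean; all record names and scores here are hypothetical, since the abstract does not publish its underlying data.

```python
# Minimal sketch (hypothetical data): compare mean DASH long-form ratings
# across the three evaluator categories (self, peer, regional leader).
from statistics import mean

# Each record is one completed evaluation: the evaluator category and the
# 1-7 ratings given on individual DASH behaviors (shortened here for brevity;
# the long form has 23 behavior items across six Elements).
evaluations = [
    {"rater": "self",   "scores": [6, 5, 6, 7, 5]},
    {"rater": "peer",   "scores": [7, 6, 7, 6, 6]},
    {"rater": "leader", "scores": [5, 5, 6, 5, 5]},
]

def mean_by_rater(records):
    """Average all behavior ratings, grouped by evaluator category."""
    grouped = {}
    for rec in records:
        grouped.setdefault(rec["rater"], []).extend(rec["scores"])
    return {rater: round(mean(scores), 2) for rater, scores in grouped.items()}

print(mean_by_rater(evaluations))
# -> {'self': 5.8, 'peer': 6.4, 'leader': 5.2}
```

With real data, each facilitator would contribute one self-evaluation, one peer evaluation, and one leader evaluation, and the per-category means (or per-Element means) would be compared to see whether the three evaluator types rate systematically differently.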