Using the Delphi Technique to Develop a Peer-Review Debriefing Instrument for Simulation Healthcare Education

Sunday, 8 November 2015: 11:00 AM

Jennifer Saylor, PhD, MSN, BSN, RN, APRN-BC
School of Nursing, University of Delaware, Newark, DE, USA

BACKGROUND/SIGNIFICANCE

Creating an educational curriculum that prepares future healthcare providers with the essential knowledge, psychomotor skills, communication skills, and critical thinking skills is vital and challenging for faculty. Simulation has been shown to be an effective method for learning, practicing, or demonstrating a variety of skills needed in specific clinical environments (Herge et al., 2013). It provides opportunities for learners to practice cognitive, affective, and psychomotor skills in lifelike situations without risk to patients. Simulation in healthcare is used in both pre-licensure and post-licensure education, in academic and clinical settings. Learners span the healthcare professions, including nurses, physicians, physical therapists, and occupational and speech therapists, in any combination to create interprofessional simulations.

The components of a patient simulation include a pre-briefing, the simulation experience itself, and a post-simulation analysis (debriefing). Debriefing is an essential part of the simulation experience because this is where most of the learning occurs (Arafeh, Hansen, & Nichols, 2010). Debriefing after a simulation is an intentional process designed to provide awareness and insight, as well as to strengthen and transfer learning via an experiential learning exercise (Arafeh, Hansen, & Nichols, 2010; Miller, Riley, Davis, & Hansen, 2008; Morgan et al., 2009). Through this feedback exchange with a facilitator, learners have the opportunity to reflect on their decision making, critical thinking, and interprofessional communication via self-analysis and peer evaluation.

The literature identifies strategies for developing the simulation experience and evaluating learners (Dreifuerst, 2009; Fanning & Gaba, 2010). While there is a body of literature identifying effective strategies for facilitating group discussions in the classroom, as well as assessment tools for evaluating faculty effectiveness, these concepts have not been applied to assessing faculty effectiveness in facilitating debriefing after a clinical patient simulation. Educators often struggle to transition from instructor-centric education to learner-centric facilitation in the debriefing process.

PURPOSE

The purpose of this pilot study was to use the Delphi technique to develop a peer-debriefing evaluation instrument for assessing the effectiveness of a facilitator during simulation education.

METHODS

The peer-debriefing evaluation instrument was developed using the Delphi technique. This technique is a useful research methodology for achieving consensus on a particular issue where there is a lack of empirical evidence (Asselin & Harper, 2014; Falzarano, 2013). It has been applied in diverse projects, including program planning, needs assessment, policy determination, resource utilization, and validation of assessment tools (Stefanovich, Williams, McKee, Hagemann & Carnahan, 2012). The technique is a cost-efficient method of generating ideas and facilitating consensus among experts in the field who do not meet face to face and may be geographically distant (Asselin & Harper, 2014). Three rounds of review and feedback by content experts were necessary to achieve the desired level of consensus.

In preparation for establishing inter-rater reliability among five consented experts, the researchers developed and recorded three debriefing simulations, using vignettes to illustrate different levels of a facilitator’s debriefing proficiency. Each debriefing session was from the same simulation, but the competency of the facilitator varied. The five experts received a half-day education session. To demonstrate the phase 2 process, each expert was provided one completed pre-assessment form and three post-evaluation forms, one for each debriefing vignette. After each debriefing session, the experts promptly completed the post-evaluation form independently and in silence. After all three videos had been viewed, the researchers reviewed each video and provided the ‘real score’ and its rationale.

Sample: Purposive sampling was used to recruit the expert panel, whose members were identified based on authorship in the literature or nomination by established clinical simulation center directors. Of the 15 who consented, 11 responded and agreed to complete the study. The Delphi expert panel (n = 11) and the phase 2 experts (n = 5) represented a range of clinical expertise, including nursing, radiation oncology, medicine, occupational therapy, physical therapy, and academia-related healthcare fields.

Instrument development and administration: A thorough search of the contemporary literature and the experiences of the researchers provided the framework for developing the items and rating scale of the initial instrument for evaluating faculty effectiveness in conducting a debriefing after a clinical simulation activity. The participants completed an Internet-based survey (Qualtrics) over a 5-month period, with each of the three rounds taking approximately 2 hours, for a total of 6 hours.

The respondents were asked to evaluate two parts of the peer-debriefing evaluation instrument. Part one of the survey, “Pre-assessment of the simulation experience,” is a self-assessment of debriefing skills that the debriefer would complete. Part two, “Debriefing Evaluation (Self and Peer Assessment),” is an assessment of the various aspects of conducting a debriefing, categorized into eight areas, including structure and organization of the debriefing, verbal and non-verbal communication, recapping the simulation experience, and reflecting on action. The debriefing experience was evaluated on a 4-point scale (1-4) based on the percentage of elements completed in each area. This instrument would be completed by both the evaluator and the debriefer and then used to guide the peer-evaluation process. Respondents were asked to rate statements on the survey for clarity and understandability using a 4-point Likert scale (vague to clear) and were provided ample space to suggest additions, deletions, or changes to survey elements.
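
The scoring logic described above can be illustrated with a brief sketch. The abstract names only four of the eight evaluation areas and does not specify the percentage cut points behind the 1-4 ratings, so the remaining areas and the band boundaries below are hypothetical placeholders rather than the instrument's actual definitions.

    # Illustrative sketch only; the cut points and the unnamed areas are assumptions.
    NAMED_AREAS = [
        "Structure and organization of the debriefing",
        "Verbal and non-verbal communication",
        "Recapping the simulation experience",
        "Reflecting on action",
        # ...the four remaining areas are not enumerated in this abstract.
    ]

    def area_score(percent_completed: float) -> int:
        """Map the percentage of elements completed in an area to a 1-4 rating.
        The quartile bands used here are assumed for illustration."""
        if percent_completed >= 75:
            return 4
        if percent_completed >= 50:
            return 3
        if percent_completed >= 25:
            return 2
        return 1

    # Both the debriefer (self-assessment) and the evaluator would complete the
    # form; the two score sets then guide the peer-evaluation discussion.
    debriefer_scores = {area: area_score(80) for area in NAMED_AREAS}
    evaluator_scores = {area: area_score(60) for area in NAMED_AREAS}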

Data analysis: Quantitative and qualitative analysis methods were used in each phase of the study, and items that did not reach the acceptable level of 80% consensus among panel experts were omitted. Content analysis of open-ended responses was used to refine all components of the instrument. Intraclass correlation coefficients (ICCs) were calculated to determine inter-rater reliability.
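
As a minimal sketch of how this analysis could be carried out, the following Python fragment computes an item-level consensus percentage and two-way random-effects ICCs (the “single measure” and “average measures” forms reported in the results) from an items-by-raters matrix. The data, the cutoff of 3 on the 4-point scale as an “acceptable” rating, and the use of Python/NumPy are assumptions for illustration; this is not the study's actual analysis code.

    # A minimal sketch, not the study's analysis code. Ratings are arranged as
    # an items-by-raters matrix; the data and the "acceptable" cutoff of 3 on
    # the 4-point scale are assumptions for illustration.
    import numpy as np

    def consensus_rate(item_ratings: np.ndarray, acceptable: int = 3) -> float:
        """Percent of panelists rating an item at or above the acceptable level;
        items falling below 80% consensus would be omitted."""
        return float(np.mean(item_ratings >= acceptable) * 100)

    def icc_two_way(x: np.ndarray) -> tuple[float, float]:
        """Two-way random-effects ICCs (Shrout & Fleiss, 1979):
        returns (single measure ICC(2,1), average measures ICC(2,k))."""
        n, k = x.shape                              # n targets, k raters
        grand = x.mean()
        row_means = x.mean(axis=1)                  # per-target means
        col_means = x.mean(axis=0)                  # per-rater means
        msr = k * np.sum((row_means - grand) ** 2) / (n - 1)    # between-target MS
        msc = n * np.sum((col_means - grand) ** 2) / (k - 1)    # between-rater MS
        resid = x - row_means[:, None] - col_means[None, :] + grand
        mse = np.sum(resid ** 2) / ((n - 1) * (k - 1))          # residual MS
        single = (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
        average = (msr - mse) / (msr + (msc - mse) / n)
        return single, average

    # Hypothetical example: 5 raters scoring 8 instrument areas on the 1-4 scale.
    scores = np.random.default_rng(0).integers(1, 5, size=(8, 5)).astype(float)
    print(consensus_rate(scores[0]), icc_two_way(scores))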

RESULTS

A three-round Delphi process was used to revise the instrument developed by the research team. Feedback from the Delphi panelists was evaluated after each round, and the instrument was updated to reflect the panel’s suggestions. Upon completion of Round 1 (n = 7), changes were made to the structure of the assessment tool and to the language of the questions and response scales. Specifically, terminology was standardized to “debriefer” rather than “facilitator,” language and use of terms were clarified, exemplar behaviors were added to the post-simulation questions, and additional explanation was provided on how the post-simulation tool would be used. Two changes were made to the response scales: 1) the definitions of the “Not Familiar” to “Very Familiar” scale were revised; and 2) the “Instructor-Centric” through “Learner-Centric” spectrum was reduced from four to three options and the definitions for the new categories were revised.

In Round 2, the Delphi panel (n = 11) operationalized the definitions of high-fidelity and low-fidelity simulations. In the final round, consensus greater than 80% was achieved for both the structural and content elements of the assessment tool. Nine participants completed this round; eight of the nine had also completed Round 2. Inter-rater reliability was very strong for the average measures, ICC = .973, and strong for the single measure, ICC = .818. Because the 80% agreement threshold established a priori was achieved, we determined that the assessment tool was ready for pilot testing in the educational setting.

CONCLUSION

The key to a successful simulation/debriefing experience is an effective debriefing facilitator. A skilled facilitator guides and assists learners in transferring their experience into clinical practice. A peer-review tool may improve facilitators’ skills and, in turn, the debriefing process, yielding more proficient healthcare professionals. Faculty can triangulate their intended performance and outcomes with this instrument to demonstrate effectiveness and/or excellence.