Overcoming Challenges in Evaluating Active versus Observer Roles in Simulation-Based Education

Saturday, 21 April 2018: 10:30 AM

Brandon Kyle Johnson, MSN, RN, CHSE
School of Nursing, Indiana University, Indianapolis, IN, USA
Deanna L. Reising, PhD, RN, ACNS-BC, FNAP, ANEF
School of Nursing, Indiana University, Bloomington, IN, USA

Different student roles are frequently used in simulation-based education. A participant in an active role, such as the primary nurse, makes decisions and is involved in total patient care during the scenario. In contrast, a participant in a passive role, such as the observer, typically watches the simulation unfold without direct involvement in decision-making. In the National Simulation Study, the authors noted that students spend a large amount of time in the passive observer role (Hayden, Smiley, Alexander, Kardong-Edgren, & Jeffries, 2014).

Current research and practice within the nursing discipline have equated having students observe nursing practice with constructivist and experiential learning, the guiding frameworks that underpin simulation-based education (Jeffries, Rogers, & Adamson, 2016). However, no research in nursing education has explored whether these experiences actually support constructivist and experiential learning models for learners in observational roles. These theories include the concepts of assimilation, accommodation, and active experimentation, which can be evaluated only across two experiences of a similar nature (Kolb, 2015; Piaget & Cook, 1952). Therefore, the purpose of this pilot study was to determine whether two simulation-based experiences, each involving a clinical situation with respiratory distress, were contextually equivalent scenarios.

Research in nursing education is beginning to demonstrate that learning outcomes do not differ significantly by student role in simulations (Fluharty et al., 2012; Kaplan et al., 2012; Livsey & Lavender-Stott, 2015; Rode et al., 2016; Scherer et al., 2016; Smith et al., 2013; Thidemann & Soderhamn, 2013; Zulkosky et al., 2016). However, only three studies examined more than one simulation (Livsey & Lavender-Stott, 2015; Rode et al., 2016; Scherer et al., 2016). Additionally, many of the existing studies failed to report psychometric analyses of their knowledge assessments and/or behavioral instruments, raising questions about the stated outcomes (Kaplan et al., 2012; Smith et al., 2013; Thidemann & Soderhamn, 2013). As nursing education programs seek to increase simulation-based experiences, research is needed to demonstrate whether one simulation-based experience is sufficient, regardless of role, for learners to assimilate and accommodate in subsequent scenarios. Assimilation and accommodation have been described as the “ultimate goal in a practice profession and the essence of reflection” in simulation-based education (Dreifuerst, 2009, p. 111).

This study took place at a large multi-campus university’s baccalaureate prelicensure nursing program in the Midwest and involved 78 students and 10 faculty across two campuses. Data collection for the two simulations included four pre/post-tests designed to measure knowledge related to respiratory distress. Efforts to establish equivalency included constructing each exam with a similar number of questions assessing equal numbers of knowledge domains and NCLEX-RN competencies, in alignment with the 2016 NCLEX-RN Test Plan. Content validity was established with an expert NCLEX-RN item writer. Item analyses were conducted to assess difficulty, discrimination, and instructional sensitivity (Haladyna, 2016; Waltz, Strickland, & Lenz, 2017), as well as internal consistency using the Kuder-Richardson Formula 20 (KR-20). Additionally, data collection included a list of action items developed to assess whether each simulation required similar actions to address respiratory distress. Content validity for this list was established with course faculty and a PhD-prepared nurse with expertise in nursing education research. Interrater reliability was assessed by having raters view recorded simulations.
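
To make these analyses concrete, the following is a minimal sketch in Python (not the study’s code) of the classical item statistics and KR-20 described above, assuming item responses are scored 1 (correct) and 0 (incorrect):

    import numpy as np

    def item_analysis(scores: np.ndarray):
        """scores: (n_students, n_items) matrix of 1/0 scored responses."""
        n_students, k = scores.shape
        totals = scores.sum(axis=1)

        # Item difficulty: proportion of students answering each item correctly.
        difficulty = scores.mean(axis=0)

        # Item discrimination: corrected item-total (point-biserial) correlation,
        # computed against the total score with the item itself removed.
        discrimination = np.array([
            np.corrcoef(scores[:, i], totals - scores[:, i])[0, 1]
            for i in range(k)
        ])

        # Kuder-Richardson Formula 20 for internal consistency of binary items:
        # KR-20 = k/(k-1) * (1 - sum(p*q) / variance of total scores).
        p, q = difficulty, 1 - difficulty
        kr20 = (k / (k - 1)) * (1 - (p * q).sum() / totals.var(ddof=1))

        return difficulty, discrimination, kr20

Because KR-20 applies to dichotomously scored items, it is the special case of Cronbach’s alpha appropriate for the multiple-choice knowledge assessments used here.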

Preliminary findings from this study suggest that psychometric testing of multiple-choice knowledge assessments can help nursing education researchers not only demonstrate the validity and reliability of their measurements, but also understand how sensitive the simulation scenario and debriefing are to the content of the assessment. Although critiqued as assessing a passive form of knowledge, multiple-choice tests are feasible to implement in simulation-based education (O’Donnell et al., 2014). Validity and reliability coefficients were low; however, examination of additional discrimination indices, including the Pre-Post Discrimination Index, the Individual Gain Index, and the Net Gain Index (Waltz, Strickland, & Lenz, 2017), showed that the simulation markedly improved performance on individual questions, indicating the instructional sensitivity of the simulation.
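
The exact formulas for these indices are given in Waltz, Strickland, and Lenz (2017); the sketch below uses common operationalizations (change in item difficulty from pretest to posttest, and wrong-to-right versus right-to-wrong response changes) and should be read as an assumed illustration rather than a transcription of that text:

    import numpy as np

    def sensitivity_indices(pre: np.ndarray, post: np.ndarray):
        """pre, post: (n_students, n_items) 1/0 matrices for the same students."""
        # Assumed Pre-Post Discrimination Index: change in item difficulty
        # (proportion correct) from pretest to posttest.
        ppdi = post.mean(axis=0) - pre.mean(axis=0)

        # Assumed Individual Gain Index: proportion of students who answered
        # an item incorrectly on the pretest but correctly on the posttest.
        gain = ((pre == 0) & (post == 1)).mean(axis=0)

        # Assumed Net Gain Index: gains minus losses (students who moved from
        # correct on the pretest to incorrect on the posttest).
        loss = ((pre == 1) & (post == 0)).mean(axis=0)
        return ppdi, gain, gain - loss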

The list of action items demonstrated moderate internal consistency using Cronbach’s alpha (Simulation 1 = .692; Simulation 2 = .795); however, faculty who participated in and viewed the recorded simulations reported inconsistencies in how the simulations were facilitated across instructors and campuses, which confounded the ability to state that the two simulation experiences were equivalent. This finding supports the need for multi-site/multi-campus simulation programs to be firmly grounded in the International Nursing Association for Clinical Simulation and Learning (INACSL) Standards of Best Practice: Simulation (2016). Otherwise, it is highly likely that simulation-based experiences will differ from facilitator to facilitator and from campus to campus.
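
For transparency, a minimal sketch of the Cronbach’s alpha computation reported above, assuming each checklist item is scored per observed simulation (e.g., 1 = action performed, 0 = not performed):

    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """items: (n_observations, n_items) matrix of scored checklist items."""
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1)
        total_variance = items.sum(axis=1).var(ddof=1)
        # alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
        return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)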

Lastly, when evaluating the action items, preliminary findings echo research on formative and summative testing showing that while “all faculty are content experts, not all are expert evaluators” (Kardong-Edgren et al., 2017). Interrater reliability was not established during this pilot study. Traditional simulation design, in which more students observe than participate, presents challenges for research on student role because it produces clustered data. While the action items were present in each simulation, as the moderate Cronbach’s alpha values demonstrate, evaluation would need to be individualized, which poses challenges of time, resources, and feasibility within a clinical course.
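
The abstract does not name the interrater-reliability statistic that was attempted; Cohen’s kappa is a common choice when two raters score the same dichotomous action items, so the following sketch is offered only as an illustration of that approach, not as the study’s method:

    import numpy as np

    def cohens_kappa(rater_a: np.ndarray, rater_b: np.ndarray) -> float:
        """rater_a, rater_b: equal-length arrays of categorical ratings."""
        categories = np.union1d(rater_a, rater_b)
        # Observed agreement: proportion of items both raters scored the same.
        p_observed = np.mean(rater_a == rater_b)
        # Chance agreement: from each rater's marginal category proportions.
        p_chance = sum(
            np.mean(rater_a == c) * np.mean(rater_b == c) for c in categories
        )
        return (p_observed - p_chance) / (1 - p_chance)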

Findings from this pilot study revealed numerous challenges in conducting research on student role in simulation, multiple simulations, interrater reliability, validity and reliability in educational measurement, and multi-site/multi-campus designs. These findings, although inconclusive, contribute to ongoing discussions in nursing education that will assist researchers and educators in using simulation as an educational intervention. Additional item analyses can provide educators and researchers with information about instructional versus content sensitivity; for novice researchers and educators, these additional indices can indicate how effective classroom, clinical, or simulation-based instruction is relative to the content examined. A discussion of integrating the INACSL Standards of Best Practice: Simulation will further contribute to advancing simulation-based experiences in individual schools, multi-campus schools, and multi-site research. Finally, while pilot studies in research and doctoral programs may yield inconclusive data, the learning experience is crucial to developing an understanding of the processes, challenges, and limitations of nursing education research.