The DMLES: An Instrument to Assess Competence in Debriefing for Meaningful Learning

Friday, March 27, 2020: 3:15 PM

Cynthia Sherraden Bradley, PhD, RN, CNE, CHSE1
Brandon Kyle Johnson, PhD, RN, CHSE2
Kristina Thomas Dreifuerst, PhD, RN, CNE, ANEF3
Aimee A. Woda, PhD, RN-BC3
Jamie L. Hansen, PhD4
Ann Loomis, PhD, RN, CNEcl5
(1)School of Nursing, University of Minnesota, Minneapolis, MN, USA
(2)School of Nursing, Texas Tech University Health Sciences Center, Lubbock, TX, USA
(3)College of Nursing, Marquette University, Milwaukee, WI, USA
(4)Department of Nursing, Carroll University, Waukesha, WI, USA
(5)School of Nursing, Purdue University, West Lafayette, IN, USA

Purpose:

Formal training in a theory-based debriefing method, followed by competence assessment, has been recommended by nursing regulatory bodies because of the significance of debriefing in simulation (Alexander et al., 2015; INACSL Standards Committee, 2016; NLN Board of Governors, 2015). However, descriptions of the necessary training and of how to assess debriefing competence are lacking. Valid and reliable instruments that assess debriefing behaviors are essential before such competence criteria can be established.

Debriefing for Meaningful Learning (DML) is a debriefing method that helps students apply nursing knowledge and skills (Researcher, 2018), promotes the development of clinical reasoning (Forneris et al., 2015), and deepens learning by cultivating reflective thinking (Researcher, 2015). Although DML has been adopted nationally and internationally, it cannot be assumed that a debriefer will use the method competently even after receiving training (Jeffries et al., 2015), and there is no benchmark for competence (Researcher, 2019).

The Debriefing for Meaningful Learning Evaluation Scale (DMLES) was developed as a 31-item behavioral observational rating scale that assesses the application of DML (Researcher & Researcher, 2016). The DMLES demonstrated internal consistency (Cronbach's alpha = 0.88), interrater reliability (total-scale ICC = 0.86, p < .01), and content validity (scale-level CVI = 0.92). It was first modified into the 57-item Debriefing for Meaningful Learning Inventory (DMLI), a self-report measure of a debriefer's understanding and application of DML. A latent class confirmatory factor analysis supported the DMLI as an initial valid measure of DML (Researcher, 2018). However, the instrument was challenging to use because its descriptors were not consistently interpreted and its criteria were too ambiguous for novice debriefers; measuring debriefing behaviors therefore remained challenging (Researcher, 2019).
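For reference, the internal consistency index reported here is the standard Cronbach's alpha for a k-item scale (a general definition, not a computation reported in this abstract):

\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_{i}^{2}}{\sigma_{T}^{2}}\right)

where \sigma_{i}^{2} is the variance of item i and \sigma_{T}^{2} is the variance of the total scale score. The scale-level CVI is conventionally the proportion of items judged content-relevant by the expert panel, averaged across experts.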

The DMLES was modified a second time into a 20-item behavioral rating scale that can be used for both self-assessment (DMLES-Debriefer) and objective assessment (DMLES-Rater) to further understand how well debriefers apply DML. This session reports research testing this new iteration of the DMLES. The aims of the study were to 1) psychometrically test the revised DMLES for both subjective and objective use, and 2) evaluate whether debriefers' assessments of their own debriefing differ from DML experts' assessments of the same debriefing.

Methods:

Thirty debriefers from five Midwestern prelicensure nursing programs received structured four-hour DML training. Within one month, each recorded themselves debriefing prelicensure nursing students following simulation. Before and after viewing the recorded debriefing, debriefers scored their own performance using the DMLES-Debriefer. DML experts also viewed and assessed each recording with the DMLES-Rater, and the two sets of DMLES scores were compared.

Results:

Descriptive statistics were used to summarize the DMLES data and to examine sample normality and homogeneity between sites. To address the first aim, each DMLES item was analyzed using the intraclass correlation coefficient (ICC) and Cronbach's alpha to determine item reliability and interrater reliability; the second aim was addressed with independent-samples, one-sample, and paired-samples t-tests.
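The abstract does not specify which ICC model was used; assuming a two-way random-effects, absolute-agreement, single-rater model (Shrout and Fleiss's ICC(2,1)), a common choice for interrater reliability, the estimate is

ICC(2,1) = \frac{MS_{R} - MS_{E}}{MS_{R} + (k-1)\,MS_{E} + \frac{k}{n}\left(MS_{C} - MS_{E}\right)}

where MS_{R}, MS_{C}, and MS_{E} are the mean squares for rows (subjects), columns (raters), and error from a two-way ANOVA, k is the number of raters, and n is the number of subjects.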

Conclusion:

Psychometric properties of the revised DMLES instruments and significant findings from the debriefing assessments will be presented, along with implications for instrument use, establishing competence, regulation, and teaching practice.
