Distinguishing Novice Competency: How Do Simulation Assessments Measure Up?

Friday, March 27, 2020: 9:30 AM

Mary Ann Shinnick, PhD, ACNP, CHSE
School of Nursing, UCLA, Los Angeles, CA, USA

Purpose: Novice nurses participate in simulation for training and competency assessment to ensure patient safety.1-3 However, commonly used instruments vary in their ability to distinguish novice from expert performance.4 The research question for this study was "which assessment instruments used in simulation are best able to distinguish between the novice and the expert nurse, as measured by sensitivity (ability to identify the expert) and specificity (ability to identify the novice)?" This study compared six nursing simulation assessments: the Lasater Clinical Judgment Rubric (LCJR), the Quint Leveled Competency Tool (QLCT), Subjective Pass/Fail, Time to Task (TTT; completion of simulation objectives within 5 minutes) and, using eye tracking, Fixation Count (number of times looking at an area of interest [AOI], the patient vital signs on a monitor) and Dwell Time (length of time looking at the AOI). The theoretical framework for this study was Benner's Novice to Expert model.5

Methods: A comparative design study was conducted using known groups (Novice Nurses [senior prelicensure nursing students from two universities; n = 39] and Expert Nurses [ICU or ER nurses; n = 40]). Each subject completed a Demographic Questionnaire and a 20-question Knowledge Pretest, then participated solo in a simulation depicting a heart failure patient presenting with shortness of breath. This was followed by a parallel Knowledge Posttest. The simulation videos, captured using a wearable eye tracker (point of view, anonymous video), were coded and randomized. Fourteen volunteer reviewers from across the US remotely viewed the videos and scored the LCJR, the QLCT, and then Subjective Pass/Fail. Time to Task, Fixation Count, and Dwell Time were scored post hoc by the researcher using eye-tracking software. Data analysis included descriptive statistics, independent t-tests, and sensitivity and specificity.

Results: There was a significant difference between the groups for age (p ≤ 0.01), but not for gender (p = 0.73), number of prior simulations (p = 0.12), Knowledge Pretest (p = 0.12) or Posttest (p = 0.27) scores. However, there was a statistically significant difference between the groups on the LCJR (p = 0.05), the QLCT (p = 0.02), TTT (p = 0.02), Fixation Count (p ≤ 0.01) and Dwell Time (p ≤ 0.01), but not Subjective Pass/Fail (p = 0.19). While sensitivity was good for the LCJR and the QLCT (.80 and .70), the specificity of each was poor (.38 and .56). The sensitivity of Subjective Pass/Fail, TTT, Fixation Count and Dwell Time was poor (.42, .48, .40 and .40). However, specificity for Subjective Pass/Fail and TTT was good (.72 and .77), and that of Fixation Count and Dwell Time was excellent (.92 and .92).
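As a minimal illustration of how sensitivity and specificity are defined here (the counts below are hypothetical and are not the study's raw data), each instrument's decisions can be tallied against known group membership, with experts judged competent counted as true positives and novices judged not competent counted as true negatives:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN): proportion of experts correctly identified.
    Specificity = TN / (TN + FP): proportion of novices correctly identified."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical tallies: an instrument rates 32 of 40 experts as competent
# and 15 of 39 novices as not competent.
sens, spec = sensitivity_specificity(tp=32, fn=8, tn=15, fp=24)
print(round(sens, 2), round(spec, 2))  # 0.8 0.38
```

Under these hypothetical counts the instrument would discriminate the expert well but frequently misclassify the novice, the pattern reported above for the LCJR and QLCT.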

Conclusion: The LCJR and the QLCT may be good for establishing expert competency, but their poor specificity means they should not be used in high-stakes competency testing where identifying the less experienced nurse is important. More objective measures of competency, such as TTT, Fixation Count and Dwell Time, should be considered and explored further for use in competency assessment of novice nurses.
