The Effect of Faculty Training and Personality Characteristics on High Stakes Assessment of Simulation Performance

Friday, 20 April 2018: 11:30 AM

Ann Holland, PhD, RN1
Deborah Bambini, PhD, WHNP-BC, CNE, CHSE2
Linda Blazovich, DNP, RN, CNE3
Vicki Schug, PhD, RN, CNE3
Jone Tiffany, DNP, RN, CNE, CHSE, ANEF1
(1)Nursing, Bethel University, St. Paul, MN, USA
(2)Kirkhof College of Nursing, Grand Valley State University, Grand Rapids, MI, USA
(3)Department of Nursing, St. Catherine University, St. Paul, MN, USA

Evaluating the clinical competencies of nursing students is essential as faculty prepare them for a healthcare practice environment in which quality, safety, and patient outcomes are of the highest priority. As greater emphasis is placed on high-stakes assessment of clinical performance in nursing education, training faculty evaluators to ensure strong intra- and inter-rater reliability in the assessment of simulation performance is paramount. Assessment methods must be consistent with the NLN Fair Testing Guidelines for Nursing Education (NLN, 2012). Central to these guidelines is the definition of “fair”: that “all test-takers are given comparable opportunities to demonstrate what they know and are able to do in the learning area being tested” (p. 3). Well-designed research studies that investigate the factors needed to develop and implement fair and reliable high-stakes testing are necessary.

This presentation describes the results of a nationwide experimental study conducted to test the effectiveness of a training intervention in producing intra- and inter-rater reliability among nursing faculty evaluating student performance in simulation. The study is an extension of the NLN Project to Explore the Use of Simulation for High-Stakes Assessment (Rizzolo, Kardong-Edgren, Oermann, & Jeffries, 2015), which evaluated the process and feasibility of using manikin-based, high-fidelity simulation for high-stakes assessment in pre-licensure RN programs. The NLN project raised more questions than it answered about simulation design, implementation, and performance assessment. Two questions that emerged from the NLN project were: (a) are there specific qualities associated with faculty who are comfortable and consistent in the evaluator role? and (b) what are the best methods to train raters? (Rizzolo, 2014).

These questions guided the research question for this experimental study: What is the effect of (a) a training intervention and (b) faculty personality characteristics on faculty ability to achieve intra- and inter-rater reliability when evaluating student performance during high-stakes simulation? With NLN approval, the student performance videos and the Creighton Competency Evaluation Instrument (CCEI) used in the NLN project were also used in this study. The CCEI is a performance evaluation instrument that measures 23 skills related to assessment, communication, clinical judgment, and patient safety. It was found to be valid and reliable for assessing the clinical competency of pre-licensure students in simulation in preparation for the National Council of State Boards of Nursing (NCSBN) National Simulation Study (Hayden, Smiley, Alexander, Kardong-Edgren, & Jeffries, 2014). The version of the CCEI used in this study specified minimum performance behaviors unique to the simulation scenario enacted in the student performance videos and asked participants to indicate whether they judged the students performing the simulation to be competent. Participants evaluated student performances expected to demonstrate an end-of-program level of competence.
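To illustrate how checklist-style ratings of this kind can be recorded and summarized, the sketch below uses hypothetical item names and a simple demonstrated/not-demonstrated/not-applicable coding. It is not the CCEI itself; the actual instrument’s items, wording, and scoring rules are those published by its authors.

```python
# Hypothetical, simplified illustration of a checklist-style competency rating.
# Item names and the coding convention (1 = minimum behavior demonstrated,
# 0 = not demonstrated, None = not applicable to this scenario) are assumptions
# for the example, not the published CCEI items or scoring rules.
ratings = {
    "obtains_pertinent_data": 1,
    "performs_follow_up_assessment": 0,
    "communicates_effectively_with_team": 1,
    "interprets_lab_results": None,   # not applicable in this scenario
    # remaining checklist items would follow
}

applicable = {item: score for item, score in ratings.items() if score is not None}
percent_met = 100 * sum(applicable.values()) / len(applicable)

# In the study, raters also recorded a separate, holistic judgment of competence
# alongside the item-level scores; it is represented here as a simple flag.
overall_competent = True

print(f"{percent_met:.0f}% of applicable behaviors met; judged competent: {overall_competent}")
```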

Consistent with the NLN project, high-stakes assessment was defined as “an evaluation process associated with a simulation activity that has a major academic, educational, or employment consequence . . .” (Meakim et al., 2013, p. S7). Clinical competence was defined as the ability to “observe and gather information, recognize deviations from expected patterns, prioritize data, make sense of data, maintain a professional response demeanor, provide clear communication, execute effective interventions, perform nursing skills correctly, evaluate nursing interventions, and self-reflect for performance improvement within a culture of safety” (Hayden, Jeffries, Kardong-Edgren, & Spector, 2011).

A total of 102 faculty were recruited from nursing programs across the country. Inclusion criteria were full-time teaching status in an accredited associate degree or baccalaureate nursing program, experience with simulation, experience with clinical competency evaluation in clinical or simulation settings, education in evaluation and measurement, and proficiency with web-based technologies. Participants consented to complete study activities requiring up to 20 hours over a 2½-month period and were randomized into control and intervention groups.
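As a simple illustration of random allocation of this kind, the sketch below shuffles a hypothetical participant roster and splits it into two equal groups; the identifiers, seed, and procedure are assumptions for the example and do not describe the study’s actual randomization method.

```python
import random

# Hypothetical participant IDs standing in for the 102 recruited faculty.
participants = [f"P{i:03d}" for i in range(1, 103)]

rng = random.Random(2018)  # fixed seed so the example is reproducible
shuffled = participants[:]
rng.shuffle(shuffled)

# Allocate half to the intervention group and half to the control group.
half = len(shuffled) // 2
intervention_group = shuffled[:half]
control_group = shuffled[half:]

print(f"{len(intervention_group)} intervention, {len(control_group)} control")
```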

Through a training intervention, the study sought to build a shared mental model of end-of-program competence in a video-recorded simulation performance among participants who had no prior relationship or shared curriculum but who, in theory, shared a perspective on the clinical knowledge, skills, and abilities students need at the end of a pre-licensure RN academic program. The research team designed a basic orientation and an advanced evaluator training module that incorporated most elements of the training methodology established by Adamson and Kardong-Edgren (2012) to evaluate inter-rater reliability for the CCEI and used in the NCSBN National Simulation Study (Hayden et al., 2014). The intervention group received the basic orientation and the advanced evaluator training; the control group received only the basic orientation. After completing the basic orientation or the full training intervention, all participants proceeded to the experimental procedure, in which they evaluated the student performance videos using the CCEI. All participants also completed the Clifton StrengthsFinder Inventory, a web-based assessment of normal personality from the perspective of positive psychology (Rath, 2007), and a survey eliciting their perspectives on how their personality characteristics influence student assessment. A total of 75 participants completed all study activities, with equal numbers remaining in the control and intervention groups.

Quantitative analyses, including descriptive statistics and reliability estimates, were performed to evaluate the effect of training on inter- and intra-rater reliability in scoring the CCEI. Qualitative analysis was conducted to identify themes reflecting the influence of faculty personality characteristics on performance assessment. Participants' decisions about student competency also underwent qualitative analysis to identify the performance factors that influenced evaluation decisions.
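As one illustration of the kind of reliability estimate such an analysis can involve, the sketch below computes simple percent agreement and Cohen's kappa for two raters scoring the same items. The data are hypothetical, and the study's actual statistical procedures may differ.

```python
from collections import Counter

def percent_agreement(a, b):
    """Proportion of items on which two sets of ratings agree exactly."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohen_kappa(a, b):
    """Chance-corrected agreement (Cohen's kappa) between two raters."""
    n = len(a)
    observed = percent_agreement(a, b)
    freq_a, freq_b = Counter(a), Counter(b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(a) | set(b))
    return 1.0 if expected == 1 else (observed - expected) / (1 - expected)

# Hypothetical item-level scores (1 = behavior demonstrated, 0 = not) from two
# raters viewing the same recorded performance. The same calculation applied to
# one rater's scores at two time points would estimate intra-rater reliability.
rater_1 = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1]
rater_2 = [1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1]

print(f"Percent agreement: {percent_agreement(rater_1, rater_2):.2f}")
print(f"Cohen's kappa: {cohen_kappa(rater_1, rater_2):.2f}")
```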

The results of this study inform best practices in high-stakes assessment using simulation. Descriptive and statistical findings will be presented that extend the results of the original NLN project and suggest principles and methods for training faculty evaluators. The qualitative findings suggest that it is important for nursing faculty to be mindful of their strengths when evaluating student performance. The results carry important implications for the design, implementation, and facilitation of simulation when used for high-stakes assessment. Ongoing research, using experimental and multi-method designs, into the multiple factors that influence high-stakes assessment of clinical simulation is recommended.