Preparing Nurse Educators to Evaluate Novice Nurse Competency: Collaborative Research Findings

Sunday, 29 October 2017: 2:45 PM

Susan Forneris, PhD
Center for Innovation in Simulation and Technology, National League for Nursing, Washington, DC, USA

Nurse educators are faced with the difficult task of ensuring that their learners have the required knowledge and skills to deliver safe, quality care to their patients. Evaluating the clinical competencies of nursing students and other healthcare professionals is essential as we prepare them for the ever-changing and rapid pace of the healthcare environment. Clarity about the specific behaviors that students need to exhibit to demonstrate competency is paramount. Equally important is the training of evaluators to assure good intra- and inter-rater reliability. Evaluation of student learning is critical throughout nursing education, and it is paramount in assuring that students are competent to transition to professional practice. The fair testing guidelines (NLN, 2012) underscore the need for faculty to differentiate formative and summative evaluation within the continuum of teaching and learning. Nursing education needs to begin to develop ethical, valid, and reliable measures for formative, summative, and high-stakes clinical evaluation. The fair testing guidelines stress the importance of multiple sources of evidence to evaluate basic nursing competence, particularly when making high-stakes decisions. In the context of simulation, Rizzolo, Kardong-Edgren, Oermann, and Jeffries (2015) report that findings from the NLN high-stakes research study yielded challenges that reinforce these guidelines. Their research also yielded many important questions; "What are the best methods to use to train raters?" is just one of them (Rizzolo, Kardong-Edgren, Oermann, & Jeffries, 2015).

Traditionally, learner evaluations have been conducted while students are providing care to patients, but this practice has many drawbacks. Conducting evaluations in a simulated setting with standardized patients or manikins (computerized or static) can provide a controlled environment. Well-designed and piloted scenarios and evaluation tools can provide the necessary validity and reliability. This presentation will share the results of a 3-year collaborative national research project that identified guidelines for best practices when designing and implementing simulation testing, selecting tools, and training raters. Implications for fair testing and assuring student competence will be discussed.