Congruence in Clinical Evaluation in Nursing Education

Friday, March 27, 2020: 8:30 AM

Elizabeth R. Van Horn, PhD, RN, CNE
Lynne Porter Lewallen, PhD, RN, CNE, ANEF
School of Nursing, University of North Carolina Greensboro, Greensboro, NC, USA

Purpose: Clinical evaluation is a complex assessment process required in most nursing programs. Unlike the process for evaluating didactic assignments, clinical evaluation often incorporates input from numerous evaluators, such as students, preceptors, and faculty. Preceptors are commonly used in undergraduate nursing education programs; however, no standardized training programs for preceptors exist. Most Boards of Nursing have guidelines for preceptor use, but these vary considerably and do not include preceptor education on the clinical evaluation of students.1 The lack of standardized guidelines and preceptor education has the potential to undermine the accuracy of clinical evaluations. The literature shows that evaluators' ratings of students' clinical performance frequently lack congruence.2-5 This inconsistency can affect the accurate evaluation of student competence, the equity of evaluations, students' successful completion of clinical courses, and program outcomes. The purpose of this presentation is to describe the state of the science on clinical evaluation of nursing students as it relates to congruence among evaluators.

Methods: As part of a larger NLN-funded research synthesis examining clinical evaluation,6 the literature on clinical evaluation was systematically reviewed, yielding 15 studies published between 1981 and 2017 that examined congruence among evaluators in clinical nursing education. The research synthesis method described by Cooper7 guided the study, and the nursing research literature was searched through June 2019.

Results: Congruence was defined as the comparison of clinical evaluation outcomes between two or more types of evaluators. The types of evaluators varied among studies and included clinical faculty, preceptors, students themselves (self-evaluation), student peers, and, in one study, a family member of a patient. Students evaluated were enrolled in associate's, bachelor's, diploma, and post-graduate programs. Seven of the studies were conducted in the U.S., and the remaining studies were conducted in five other countries, indicating that this is an international problem. All studies used quantitative or mixed-methods designs, and most used comparative analyses. Clinical experiences were predominantly in inpatient settings with varied specialty patient populations.

Most of the studies found incongruence among the different types of evaluators. When student self-evaluations were compared with faculty evaluations, findings were mixed. Incongruence between preceptor and faculty evaluations occurred in many studies, with the majority revealing that preceptors rated student clinical performance higher than faculty did. Preceptors have identified several barriers to achieving congruence in evaluations, including differing views of how the measured competencies are defined, difficulty discerning different levels of competence, and difficulty providing constructive feedback.2,8

Conclusion: Congruence in clinical evaluations, especially between preceptors and faculty, is necessary to promote accurate and equitable evaluation of student clinical performance. These findings indicate that incongruence is a problem identified across program types and in both U.S. and international settings. Strategies for increasing congruence among evaluators include preceptor education on clinical evaluation instruments and processes, the use of instruments that are valid for measuring student competence in the specific clinical setting, and a sustained, dynamic relationship among clinical evaluators to improve communication and clarify expectations.
