Research Synthesis of the State of the Science on Clinical Evaluation in Nursing Education

Saturday, April 9, 2016: 11:05 AM

Lynne Porter Lewallen, PhD, MSN, BSN, RN, CNE, ANEF
School of Nursing, The University of North Carolina at Greensboro, Greensboro, NC
Elizabeth Van Horn, PhD, MSN, BSN, RN, CNE
School of Nursing, The University of North Carolina at Greensboro, Greensboro, NC

The purpose of this NLN-funded study was to conduct a research synthesis to determine the state of the science related to clinical evaluation in nursing education programs. There were two major rationales for this study. First, in the seminal work on transforming nursing education through clinical teaching in nursing (Benner, Sutphen, Leonard, & Day, 2010), very little was written about the clinical evaluation process. Second, in a preliminary examination of the literature on clinical evaluation, we found that of the few summary articles available, most were limited in scope, such as Cant, McKenna, and Cooper's (2013) summary of the use of the Objective Structured Clinical Examination (OSCE). The research synthesis method described by Cooper (2010) was used to guide the study. Included in our synthesis were research studies that focused on clinical evaluation of nursing students at any level. Exclusion criteria included articles that did not report results of a study, studies that focused on practicing nurses rather than nursing students, studies focusing on human patient simulation, studies focusing only on student perceptions of or satisfaction with clinical evaluation, and articles not available in English. A comprehensive literature search, conducted with the assistance of a Health Sciences librarian, covered twelve computerized databases, the tables of contents of seven leading nursing education journals, the reference lists of five review articles, and the abstracts of conference proceedings available at the Virginia Henderson Library. These searches yielded a total of 226 articles, of which 77 met study criteria and were analyzed. Of the 77, 59 used quantitative methods, 8 used qualitative methods, and 10 used mixed methods. Our analysis consisted of narrative synthesis; no groups of studies were found that were amenable to quantitative meta-analysis or qualitative meta-synthesis.
In the quantitative studies, the following designs were used: descriptive (n=11), correlational (n=7), comparative (n=15), quasi-experimental (n=11), experimental (n=6), and psychometric testing (n=9). We then examined the studies (excluding the articles that focused on psychometric testing) to determine the level of evidence represented by this body of work. Classified according to Melnyk and Fineout-Overholt's (2011) levels of evidence, the studies were distributed as follows: Level 2 (one or more randomized controlled trials), 6 studies; Level 3 (controlled trial without randomization), 11 studies; Level 4 (case-control or cohort study), 15 studies; and Level 6 (single descriptive or qualitative study), 18 studies. The studies were categorized into topics. The topics were exhaustive but not mutually exclusive, because some studies had multiple aims. The topics included: teaching methods; OSCE; congruence; faculty/preceptor issues with clinical evaluation; essential clinical behaviors; competence; topic-based evaluation; clinical reasoning; instrumentation; and decision making about clinical grades. Two areas for future research stand out most in this study: the need to measure competence in the clinical area accurately and efficiently, and the need for reliable and valid instrumentation. The largest number of studies located were on the topic of competence (n=31); all but two were conducted with undergraduate nursing students. The majority of these studies aimed to measure global competence at the end of a nursing program; most used researcher-developed instruments, and many used student self-report measures. There is a need for a more standardized approach to measuring clinical competence so that results can be compared across programs, nationally and internationally. Nursing education science is in its infancy in many areas.
The majority of the research designs used in the studies were non-experimental, such as descriptive or correlational, with small convenience samples, which limits the strength of the evidence base of our science. Nurse educators frequently conduct small studies with limited budgets that address areas of local concern but often do not contribute to the larger body of knowledge. An important finding from this study is that nursing education research is being conducted globally. Clinical evaluation is a concern worldwide, and research findings can potentially be applied in diverse settings. By synthesizing research in this area, we can bridge the gap in evaluation of students from diverse cultures within each country and apply research findings to diverse settings, broadening the reach of nursing education research and strengthening the foundation of nursing education science. This information can help nurse educators use evidence-based methods of clinical evaluation as a foundation for their practice.

This study was funded by a National League for Nursing (NLN) Ruth Donnelly/Corcoran Research Award.