Background: Most nurses will never conduct their own research study; however, they are expected to make sense of a research paper or systematic review, interpret the results, and implement practice change in order to participate in and deliver the evidence-based nursing care that is a core competency for the profession. Melnyk, Gallagher-Ford, Long, and Fineout-Overholt (2014) list the ability to critically appraise both primary and synthesized evidence, and to evaluate and synthesize new evidence, as critical competencies for nurses, none of which are achievable without the ability to read and understand research. Despite research skills being included in most university curricula, nurses in recent studies still report that difficulties in reading and understanding research are a significant barrier to research implementation (Breimaier, Halfens, & Lohrmann, 2011; Melnyk et al., 2014; Ubbink, Guyatt, & Vermeulen, 2013). Nurses themselves identify poor experiences with trying to understand and use research as factors that contribute to a reluctance to utilise research, a reluctance that often leads them to prefer other sources of information, such as colleagues (Estabrooks, Rutakumwa, O’Leary, Profetto-McGrath, Milner, Levers et al., 2005). Meaningful engagement with evidence-based practice requires that all nurses have the skills to do more than simply ask colleagues when they have a practice problem. It seems likely that the methods used in the research education of nurses may be a factor in this issue. Numerous research studies examining educational strategies to address this problem have been conducted; however, a systematic review using rigorous methods has not previously been undertaken.
Methods: The methods of the review were specified in advance in a previously published protocol (Hines, Ramsbotham, & Coyer, 2014), and the standard methods of the Joanna Briggs Institute were employed to conduct the review. A broad search strategy was employed across 14 databases and trial registries, including Medline, CINAHL, Embase, ERIC, Cochrane CENTRAL, Web of Science, clinicaltrials.gov, and others, in an attempt to identify all the relevant research, both published and unpublished. These searches identified 4545 potentially relevant papers; after screening of titles and abstracts, 96 papers were selected for retrieval. When the full versions of the papers were examined, 10 of the 96 retrieved papers were found to fully meet the inclusion criteria. These 10 studies were critically appraised by two independent reviewers using the Joanna Briggs Institute Meta-Analysis of Statistics Assessment and Review Instrument (JBI-MAStARI) critical appraisal tools, and all 10 were found to be of sufficient quality to include. Data were extracted from each of the included studies using the JBI-MAStARI data extraction tool. Although meta-analyses were planned in the review protocol, data from the included studies were too heterogeneous (in terms of interventions, outcome measurement scales, and time-points measured) to meta-analyse.
Inclusion Criteria: Participants of interest were post-registration registered nurses working or studying in any healthcare or educational setting. Studies of midwives, enrolled nurses, licensed vocational nurses, and other similar nursing occupations and healthcare professions were excluded unless the reported data clearly separated registered nurses’ results from those of other participants.
The review considered studies that evaluated the effectiveness of any style or structure of educational program, whether based in the workplace or an educational institution, conducted with the aim of improving or increasing participants’ understanding of research literature. Study designs eligible for inclusion were randomised controlled trials, quasi-experimental trials, and pre-test/post-test studies (single or multiple groups). No restriction was placed on the publication date of eligible studies. Only studies published in English were eligible for inclusion.
Outcomes of interest were: research knowledge or understanding, as measured by an objective test or assessment; ability to critically appraise research; use of research evidence in practice; and evidence-based practice self-efficacy, preferably as measured by a validated tool.
Results: The level of evidence overall was low to moderate. The majority of included studies were single-group pre-test/post-test designs (n=7) (Billingsley, Rice, Bennett, & Thibeau, 2013; Chang, Huang, Chen, Liao, Lin, & Wang, 2013; Ecoff, 2009; Jones, Crookes, & Johnson, 2011; Reviriego, Cidoncha, Asua, Gagnon, Mateos, Garate et al., 2014; Swenson-Britt & Reineck, 2009; Tsugihashi, Kakudate, Yokoyama, Yamamoto, Mishina, Fukumori et al., 2013). One was a post-test-only two-group comparison (Woo & Kimmick, 2000), and two were two-group quasi-experimental studies (Liou, Cheng, Tsai, & Chang, 2013; Morris, 1999). Included studies were conducted in Taiwan, Japan, Hong Kong, Australia, the United Kingdom, and the United States. Participants were all registered nurses (n=453). The educational interventions were conducted in universities (n=6) and healthcare facilities (n=4). Most studies were published (n=9), with one unpublished study included (Ecoff, 2009).
Online learning was utilised by several included studies; however, it was not found to be universally effective. Of the five studies that investigated virtual, online, or e-learning, those that used interactive strategies rather than an online replication of the face-to-face coursework found statistically significant improvements in participants’ research knowledge (p<0.001) (Liou et al., 2013; Reviriego et al., 2014) and critical appraisal skills (p<0.002) (Billingsley et al., 2013). Studies (n=3) where the online coursework was identical to the classroom content (recorded lectures uploaded online) found no difference in participants’ research knowledge (Morris, 1999; Tsugihashi et al., 2013; Woo & Kimmick, 2000).
Interactivity or activity-based learning appears to be an important element throughout the included studies, with virtual journal clubs, group-based interactive programs, face-to-face group learning, and clinical fellowship programs all showing evidence of effectiveness in terms of research knowledge, critical appraisal ability, and/or research self-efficacy measured at the end of the intervention (Billingsley et al., 2013; Chang et al., 2013; Ecoff, 2009; Liou et al., 2013; Swenson-Britt & Reineck, 2009). Interactive group learning also shows some evidence of a persistent effect, with one study reporting a statistically significant improvement in the intervention group one semester after the end of the intervention period (p<0.001) (Liou et al., 2013). The single included study of traditional lecture-style classroom learning found no statistically significant effect in improving critical appraisal skills (Jones et al., 2011).
Conclusion: Overall, the level of evidence generated by this review is low to moderate. Of the ten included studies, statistically significant findings of an effect in terms of research knowledge, self-efficacy, and/or critical appraisal skills were reported in those that utilised interactive or group-based learning, whether online or face-to-face. Studies of traditional classroom activities translated to online learning did not show any effect in terms of improving research knowledge or critical appraisal ability. Future research should utilise more rigorous study designs that can more clearly establish the direction of effect.