Exploring Registered Nurses' Attitudes Towards Post Graduate Education in Australia: A Pilot Study

Monday, 28 July 2014: 8:30 AM

Linda C. Ng, LLB, BN, MN (CritCr)1
Anthony G. Tuckett, BN, MA, PhD2
Stephanie Fox-Young, RN, BA (Hons), GradDipEd, MEd, PhD2
Victoria Kain, PhD, RN2
Robert M. Eley, BSc, MSc, PhD, FSB, CBiol, CSci3
(1)Neonatal Intensive Care Unit, Royal Brisbane and Women's Hospital, Brisbane, Australia
(2)School of Nursing and Midwifery, The University of Queensland, Brisbane, Australia
(3)Emergency Medicine Research Program, University of Queensland – Princess Alexandra Hospital, Brisbane, Australia

Purpose:

Nursing education is a dynamic process designed to enable nurses to competently meet the healthcare needs of society. Health system restructuring has been associated with diminishing numbers of postgraduate specialist nurses worldwide.

The transfer of Australian postgraduate specialty nursing education from hospitals to the tertiary (higher education) sector took place in the late 1990s (Chaboyer, Dunn, & Najman, 2000). Postgraduate education in nursing has continued to grow over the years, but the benefits to students, employers and patients, and the overall impact on practice, remain unclear (Gijbels et al., 2010; Griscti & Jacono, 2006; Pelletier, Donoghue, & Duffield, 2005). Valid instruments that monitor and evaluate nurses' concerns are a central component of planning effective education, and none are currently available.

The objective of this study was to describe the development and design of an instrument to measure registered nurses' attitudes towards postgraduate education, the Nurses' Attitudes Towards Post Graduate Education (NATPGE) questionnaire, in a representative sample of registered nurses in Australia.

Methods:

Items on the NATPGE were drawn from a literature review, which informed the content and structure of the questionnaire. Several processes were undertaken to establish the validity and reliability of the NATPGE:

1.       Content validity is a crucial factor in instrument development that addresses item rigour, that is, whether an item adequately measures a desired domain of content (Grant & Davis, 1997; De Vaus, 2002). Four content experts (CE), specialising in specialist nurse education, psychometric scales, and the development and analysis of instruments, were selected to undertake judgment-quantification and agree on the final version of the NATPGE survey instrument prior to testing its face validity.

2.       Face validity, sometimes referred to as representative validity, is the degree of accuracy with which a measurement instrument represents what it is trying to measure (Bowling, 2002; Polit, Beck, & Hungler, 2001). A convenience sample of 25 Registered Nurses (RNs) was selected from four major Queensland tertiary hospitals to assess the readability and relevance of the instrument's content.

3.       Reliability is the consistency of a set of measurements or of a measuring instrument (Polit & Beck, 2010). Pilot studies are used in different ways in social science research; one such use is the pre-testing or 'trying out' of a particular research instrument (Baker, 1994, pp. 182-183), including testing its reliability. A random sample of 100 RNs from the Nurses and Midwives e-Cohort Study (NMeS) was invited to participate in a test-retest pilot as part of assessing the reliability of the online NATPGE. To gauge test-retest reliability, the instrument was administered at two time points, three weeks apart, under similar conditions.

Results:

Content and face validity were assessed using descriptive statistics. For the test-retest reliability, the 15 NATPGE questions were analysed on an item-by-item basis to calculate intra-rater reliability using the weighted kappa (kw) statistic and its standard error (SE). The kw analysis implicitly assumed that all disagreements are equally weighted, as are all agreements. The reference values for strength of agreement are those of Altman (1991): 0.00-0.20 poor, 0.21-0.40 fair, 0.41-0.60 moderate, 0.61-0.80 good and 0.81-1.00 very good agreement. Data were analysed using Stata 12 (StataCorp LP, College Station, TX).
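For reference, a general form of the weighted kappa is sketched below (the usual disagreement-weight formulation; the specific weighting scheme applied in this pilot is not detailed above and is assumed here):

\[
\kappa_w \;=\; 1 - \frac{\sum_{i=1}^{k}\sum_{j=1}^{k} w_{ij}\, p_{o,ij}}{\sum_{i=1}^{k}\sum_{j=1}^{k} w_{ij}\, p_{e,ij}}
\]

where, for an item with \(k\) response categories, \(p_{o,ij}\) is the observed proportion of respondents choosing category \(i\) at test and category \(j\) at retest, \(p_{e,ij}\) is the proportion expected by chance, and \(w_{ij}\) is the disagreement weight assigned to the \((i, j)\) pair. When every disagreement carries the same weight, as assumed above, \(\kappa_w\) reduces to the unweighted (Cohen's) kappa.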

Content and face validity

Overall, using the content validity index (CVI), both the CE and the RNs rated the NATPGE as a relevant and useful instrument for evaluating RNs' attitudes towards postgraduate education. The comments received from the CE resulted in minor changes to the wording of some items for greater clarity and simplicity. No particular concerns were raised about any of the items by the CE. The CE agreed with the arrangement of items in a positively and negatively worded sequence, which was intentional to reduce response bias.
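For clarity, the content validity index referred to here is conventionally computed at the item level as shown below (a standard definition; the rating scale anchors are an assumption, as they are not stated above):

\[
\text{I-CVI} \;=\; \frac{n_{\text{relevant}}}{N_{\text{experts}}}
\]

where \(n_{\text{relevant}}\) is the number of experts rating the item as relevant (for example, 3 or 4 on a 4-point relevance scale) and \(N_{\text{experts}}\) is the total number of experts consulted (four CE in this study).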

Reliability: Pilot Test

Complete data were available and analysed for 36 of the 100 invited RNs (36%) who completed the test-retest of the NATPGE instrument. Overall, 80% of items showed fair to moderate agreement (kw = 0.29-0.57); however, there was some variability (kw = 0.0 to 0.79) in the test-retest kw across individual questions (Graph 1).

Conclusion:

The present research indicates very good content and face validity, and whilst the overall test-retest reliability was moderate, several individual questions had poor kappa values. As such, we plan to refine the instrument before validating it in a larger sample using factor analysis. This work is currently being undertaken.