Reflective thinking has been studied as a learning outcome measure in undergraduate nursing programs in both didactic and clinical settings. Increasingly, simulation is a key component of many undergraduate nursing curricula. While a variety of instruments evaluate student learning in simulation, few evaluate the level of reflective thinking ability in the context of the simulation-learning environment for pre-licensure baccalaureate nursing students. Simulation is the ideal clinical setting for reflective thinking because debriefing sessions give students dedicated time to reflect immediately on their thoughts and actions during the simulation. Tutticci (2017) used the Reflective Thinking Questionnaire (RTQ) to explore nursing students’ reflective thinking skills after facilitated high-fidelity simulations and debriefings. The RTQ is a 16-item questionnaire with 5-point Likert scales that assesses students’ reflective thinking. Confirmatory factor analysis (CFA) revealed the tool had poor construct validity but fair to good internal consistency (Cronbach’s α = .51–.71). The purpose of our study was to further explore the content validity and reliability of the RTQ in pre-licensure baccalaureate students during their first scheduled simulated clinical experience, hereafter referred to as simulation.
Methods
Upon receipt of IRB approval, we recruited a convenience sample of 99 first-semester pre-licensure nursing students using electronic methods during their regularly scheduled clinical simulation experiences. Recruitment information and the voluntary nature of participation were reinforced at the time of data collection. After data collection was complete, descriptive analyses and confirmatory factor analysis were conducted.
Results
A final sample of N = 92 was used for the analysis after cases with missing data were removed. Confirmatory factor analysis was conducted to test the psychometric properties of the RTQ as an instrument with four latent variables: Habitual Action (HA), Understanding (U), Reflection (R), and Critical Reflection (CR).
For the 16-item RTQ, the four latent variables reflect the following constructs:
- Habitual action (HA): learners’ performance of certain behaviors with little conscious effort
- Understanding (U): learners’ comprehension of the action performed, without relating it to other situations
- Reflection (R): learners’ ability to critique, examine, and explore their own assumptions and/or experiences
- Critical reflection (CR): learners’ ability to reflect on their performance, consider multiple perspectives during their reflections, and integrate those perspectives in a transformative manner
Initial model fit indices for the 16-item, four-factor model (χ² = 169.76; df = 86; p < .01) do not support goodness of fit. However, in CFA, when χ² is significant, other model fit indices (i.e., RMSEA, the RMSEA 90% CI, the probability of close fit, CFI, and TLI) must also be considered. These indices, RMSEA = 0.102 (< .05 acceptable), RMSEA 90% CI = 0.08–0.13, probability of close fit (RMSEA < .05) = 0.00, CFI = 0.77 (> .90 acceptable), and TLI = 0.71 (> .90 acceptable), support the χ² results, suggesting that this four-factor model does not demonstrate goodness of fit.
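As a back-of-envelope check (not part of the original analysis), the reported RMSEA point estimate can be approximately reproduced from the reported χ², degrees of freedom, and analyzed sample size. The short Python sketch below assumes the common Steiger–Lind formula with an (N − 1) denominator; software packages differ slightly in this denominator, so minor rounding differences are expected.

```python
import math

def rmsea_point_estimate(chi2: float, df: int, n: int) -> float:
    """Steiger-Lind RMSEA point estimate: sqrt(max(chi2 - df, 0) / (df * (n - 1)))."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Values reported for the current study: chi2 = 169.76, df = 86, analyzed N = 92.
print(round(rmsea_point_estimate(169.76, 86, 92), 3))  # ~0.10, consistent with the reported 0.102
```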
Table 1. Demographic Profile Distribution (N = 92)

| Demographic | Value | Frequency | % |
|---|---|---|---|
| Gender | Male | 10 | 11% |
| | Female | 82 | 89% |
| Age | 18–25 years | 66 | 72% |
| | ≥ 26 years | 26 | 28% |
| Race | White | 46 | 50% |
| | Black | 46 | 50% |
| Marital Status | Single | 73 | 80% |
| | Not Single | 18 | 20% |
| Previous Simulation | No | 85 | 92% |
| | Yes | 7 | 8% |
Table 2. Comparison of studies using the RTQ and the test results (RTQ subscale Cronbach’s α)

| Study | N | Sample | Learning Environment | HA α | U α | R α | CR α |
|---|---|---|---|---|---|---|---|
| Daniels (2018) | 99 | 1st-semester pre-licensure nursing students | Simulation | .65 | .70 | .71 | .75 |
| Tutticci (2017) | 346 | 3rd-year pre-licensure nursing students | Simulation | .70 | .68 | .61 | .81 |
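For readers who wish to reproduce subscale reliabilities like those in Table 2 on their own data, Cronbach’s α can be computed directly from item-level responses. The Python sketch below is a minimal illustration only; the item names, the randomly generated responses, and the item-to-subscale groupings are placeholders, not the published RTQ item assignments or the study data.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Placeholder data: 92 respondents x 16 Likert items (1-5); replace with real RTQ responses.
rng = np.random.default_rng(0)
data = pd.DataFrame(rng.integers(1, 6, size=(92, 16)),
                    columns=[f"q{i}" for i in range(1, 17)])

# Hypothetical subscale groupings (4 items each); the actual RTQ assignments differ.
subscales = {"HA": ["q1", "q2", "q3", "q4"], "U": ["q5", "q6", "q7", "q8"],
             "R": ["q9", "q10", "q11", "q12"], "CR": ["q13", "q14", "q15", "q16"]}
for name, cols in subscales.items():
    print(name, round(cronbach_alpha(data[cols]), 2))
```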
Table 3. Comparison of CFA Model Fit Indices for the RTQ by Research Study

| Study | χ² | df | p | RMSEA | RMSEA 90% CI | CFI | GFI | IFI | NFI |
|---|---|---|---|---|---|---|---|---|---|
| Daniels | 169.76 | 86 | .00 | 0.10 | .08, .13 | .77 | | | |
| Tutticci | 2.203 | * | .00 | 0.06 | .05, .07 | ≥ .90 | ≥ .90 | ≥ .90 | ≥ .90 |

Note. Tutticci (2017) reported CFI, GFI, IFI, and NFI at or greater than .90.
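The fit indices compared in Table 3 come from confirmatory factor analyses of the four-factor RTQ model. As a minimal sketch of how such a model can be specified and its fit statistics obtained, the Python example below uses the open-source semopy package; the item names, item-to-factor assignments, and randomly generated data are placeholders rather than the study data, so the resulting statistics are illustrative only.

```python
import numpy as np
import pandas as pd
import semopy  # open-source SEM/CFA package (pip install semopy)

# Placeholder responses: 92 respondents x 16 Likert items; replace with real RTQ data.
rng = np.random.default_rng(1)
data = pd.DataFrame(rng.integers(1, 6, size=(92, 16)),
                    columns=[f"q{i}" for i in range(1, 17)])

# Four-factor measurement model in lavaan-style syntax; the item-to-factor
# assignments here are hypothetical stand-ins for the published RTQ structure.
model_desc = """
HA =~ q1 + q2 + q3 + q4
U  =~ q5 + q6 + q7 + q8
R  =~ q9 + q10 + q11 + q12
CR =~ q13 + q14 + q15 + q16
"""

model = semopy.Model(model_desc)
model.fit(data)

# calc_stats reports chi-square, RMSEA, CFI, TLI, GFI, and related fit indices.
stats = semopy.calc_stats(model)
print(stats.T)
```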
Discussion
The RTQ offers educators an instrument to assess reflective thinking in simulation-based nursing education. This study used CFA to test the reliability of the previously developed four-factor model measuring reflective thinking levels in baccalaureate nursing students in the clinical simulation environment. Although the overall CFA findings suggest poor model fit, the results are consistent with Tutticci (2017). Specifically, the critical reflection subscale demonstrates agreement between the two studies, which supports the reliability of this particular subscale. Application of the instrument in the pre-licensure nursing student population after participation in simulation supports the need for continued refinement of the RTQ.
Limitations of this study include sampling bias due to convenience sampling, measurement bias due to self-report, and the absence of repeated reliability testing after terminology was adjusted to capture simulation-specific data. Future work should strengthen the instrument by increasing the sample size to improve power and expanding the study to multiple sites to improve generalizability across different types of pre-licensure nursing programs.
The RTQ requires further refinement and testing in the simulated learning environment to measure reflective thinking as a learning outcome in pre-licensure baccalaureate nursing students. Items in the habitual action and understanding constructs require further exploration due to internal consistency findings in this study.
Conclusion
The findings of this work offer guidance for future use of the RTQ to assess reflective thinking as a learning outcome in the pre-licensure nursing population. Further testing of the RTQ will produce evidence to inform application of the instrument for assessing reflective thinking in simulated learning environments. Such evidence would help meet the need for valid and reliable instruments to measure learning outcomes in simulation.