Paper
Wednesday, July 11, 2007
Examination of methodological and practical challenges to falls risk assessments
Deborah Dolan, RN, BSN, CNA, BC, R-6, Maine Medical Center, Portland, ME, USA, Joanne Chapman, RN, BSN, MSN, MEd, R1, Maine Medical Center, Portland, ME, USA, and Kristiina Hyrkas, PhD, LicNSc, MNSc, RN, Center for Nursing Research and Quality Outcomes, Maine Medical Center, Portland, ME, USA.
Learning Objective #1: understand the importance of pilot-testing prior to adopting any instruments for wider use in an organization.
Learning Objective #2: discuss our findings and how we utilized them when choosing the ‘best possible’ falls risk assessment tool in our organization.
Background. Although the national and international research community has devoted considerable time and effort to developing falls risk assessment instruments, predicting which patients are at risk remains a challenge. Examination of the sensitivity, specificity and reliability of risk assessment tools for specific patient populations is critical. Instrument developers have often tested their tools only in the population in which the instruments were developed, so the generalizability of the findings and the usefulness of an instrument outside that population may be limited.
Aim. The aim of this presentation is to examine the reliability, sensitivity and specificity of four falls risk assessment instruments: the Maine Medical Center tool; the New York-Presbyterian (The University Hospital of Columbia and Cornell) tool; the Morse Fall Scale; and the Hendrich II Falls Risk Scale.
Method. Nurses who acted as data collectors received a one-hour educational session. The tools were used simultaneously in fourteen units. We collected 1,540 falls risk assessments in May-June 2006. Descriptive statistics were used for data analysis.
Findings.
Using incident reports, we calculated the sensitivity and specificity of the instruments under four different time scenarios. The NY instrument had the highest sensitivity (100%) when the assessment and the fall incident occurred on the same day. If the assessment was completed one day before the fall incident, the Morse instrument and the NY instrument had the highest sensitivity (78.6% at the high-risk cutoff for both); the specificity of these instruments was 37.5% and 24.9%, respectively.
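For clarity, the sensitivity and specificity reported above follow the standard screening-test definitions; the counts below (true/false positives and negatives) are generic symbols used to illustrate the calculation, not figures from our data set:

Sensitivity = TP / (TP + FN), Specificity = TN / (TN + FP)

where TP is the number of patients who fell and had been rated high risk, FN the number who fell but had not been rated high risk, TN the number who did not fall and had not been rated high risk, and FP the number who did not fall but had been rated high risk.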
Conclusions and Implications.
The findings of the study demonstrated that sensitivity and specificity varied considerably depending on the timeframe used for the analysis. They also demonstrated the importance of testing the sensitivity, specificity and reliability of an instrument with a pilot study before adopting it.