Evaluating Digital Interventions: Generally and Specifically

Sunday, 28 July 2019: 8:40 AM

Laura A. Szalacha, EdD
Morsani College of Medicine and the College of Nursing, the University of South Florida, Tampa, FL, USA

Digital health interventions are everywhere! On the one hand, they are great: scalable and accessible apps to measure blood pressure, pedometers, heart rate monitors, period and ovulation trackers, mental health companions, and more. According to Statista (2018), there are 47,526 apps in Apple's App Store. The digital health market is expected to reach 206 billion dollars by 2020.

On the other hand, there is a dearth of data on their development, feasibility, efficacy, and effectiveness. Anyone can build and sell an application. Nurse researchers need to develop and adhere to efficient methods of evaluating digital health apps across the intervention's maturity life cycle, that is, its developmental journey from prototype toward national-level implementation, without jeopardizing rigor.

This presentation focuses on the evaluation of digital health interventions in general, integrating the recommendations of the World Health Organization (2016) and Murray et al.'s (2016) framework for evaluating digital health interventions, and specifically as they were applied to the personalized telehealth Physical Activity intervention with fitness-graded Motion Exergames (PAfitME).

Murray et al. (2016) suggest key questions to ask before the construction of an app has even begun. Much like the dissertation proposals we read, when defining the problem, the basic questions are: 1) Is there a clear health need that this app is intended to address? 2) Is there a defined population that could benefit from this app? and 3) Is the app likely to reach this population, and if so, is the population likely to use it? (Murray et al., 2016, p. 4).

With affirmative answers to these questions, production of the app begins. There are a number of characteristics to measure or monitor from the prototype stage onward, such as the app's technical functionality and stability.

The earliest testing is to determine: 1) Does the app work as intended (feasibility)? 2) Will the target patients incorporate and sustain the intervention in their lives (acceptability and usability)? 3) Will relevant stakeholders use it (demand)? and 4) Can the app be used with minimal burden (practicability)?

Having addressed these questions, what follows is 1) the evaluation of the application's efficacy (whether it achieves the intended results in a controlled research setting) and, subsequently, 2) the evaluation of the app's effectiveness (whether it achieves the intended results in a non-research, uncontrolled setting).

This is precisely where Dr. Wang's "PAfitME Intervention Among Head and Neck Cancer Patients" is poised. We will describe the operationalization of these evaluation queries and explicate how we will address the subsequent evaluation questions: 1) What are the key components of the application? Which ones directly affect pain, difficulty with voice/speech, and physical movement, and how might they interact with each other? 2) What strategies should be used to support tailoring the PAfitME to participants over time? 3) Has the possibility of harm been adequately considered? 4) Has cost been adequately considered and measured? and 5) What is the overall assessment of the utility of this intervention?