Background: Mobile technologies in the form of apps can be excellent tools for the delivery of health care education, intervention, and management. Apps can help clinicians obtain current, evidence-based information for use in clinical practice (Free et al., 2013; Stoyanov et al., 2015; Arbour & Stec, 2018). Data show that the use of mHealth apps can improve self-management of chronic conditions (VonHoltz et al., 2015) and empower clients to improve their self-management of disease and wellness while improving overall health care quality (Wildenbos et al., 2016; Santos et al., 2016). The pervasive use of mobile health technology has fueled the app-based economy, with over 160,000 health-related apps in the major repositories. However, with this uptick in consumer use, practitioners often find it difficult to use or recommend apps in practice because of the lack of regulatory and evaluative mechanisms in app development. Several methods for the evaluation of specific components of mHealth apps have been proposed; however, few provide a comprehensive and systematic review (Jake-Schoffman et al., 2017). Because of this gap, Stoyanov et al. developed the Mobile Application Rating Scale (MARS) to provide an objective and systematic rating of mobile health applications for clients and clinicians (Stoyanov et al., 2015).
Overview of the MARS Tool/Methods: Stoyanov and colleagues developed the MARS tool to provide an objective and systematic method of rating mobile health applications for clients and clinicians, including a rating of the quality and accuracy of the information provided within the app. The challenge was to develop a tool that could be translated from mHealth website evaluation to app evaluation in a way that was easy to use with minimal training. The developers derived the themes and domains of the MARS instrument from a comprehensive search for app quality rating criteria, initially identifying 427 criteria related to app analysis or review. Examination of these criteria and deletion of duplicates yielded 349 criteria, which were further refined into 23 distinct sub-categories that became the 23-item MARS tool (Stoyanov et al., 2015). The MARS includes a classification section describing the app (target age group, technical aspects, focus of the app, and strategies used), and its 23 rated items are grouped into five subscales: four objective subscales (engagement, functionality, aesthetics, and quality and credibility of information) and one subjective app quality subscale (Stoyanov et al., 2015). Each item is rated on a 5-point scale anchored to its question, from 1 (“inadequate”) to 5 (“excellent”), and the item scores within each subscale are averaged to produce a subscale mean score with a maximum of 5 (a worked example follows this paragraph). The instrument showed high internal consistency (Cronbach’s alpha = .90) and interrater reliability (two-way mixed ICC = .79, 95% CI 0.75–0.83) when applied to the independent rating of 50 mental health and well-being apps. Users of the MARS tool are encouraged to view the training video (https://www.youtube.com/watch?v=25vBwJQIOcE) and to practice by reviewing several apps and comparing ratings with colleagues to increase interrater reliability.
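To make the scoring arithmetic concrete, the short Python sketch below computes subscale mean scores under the commonly cited MARS item breakdown (5 engagement, 4 functionality, 3 aesthetics, and 7 information items, plus 4 subjective quality items, for 23 in total); the individual item ratings are invented for illustration and are not taken from any published review.

```python
from statistics import mean

# Hypothetical item ratings for a single app, grouped by MARS subscale.
# Item counts follow the commonly cited breakdown (5 + 4 + 3 + 7 + 4 = 23);
# the scores themselves are invented for illustration.
ratings = {
    "engagement":          [4, 3, 4, 5, 3],
    "functionality":       [5, 4, 4, 5],
    "aesthetics":          [4, 4, 3],
    "information quality": [3, 4, 4, 3, 5, 4, 4],
    "subjective quality":  [4, 3, 4, 4],
}

# Each subscale mean is the average of its item scores, where items run
# from 1 ("inadequate") to 5 ("excellent"), so each mean is capped at 5.
for subscale, items in ratings.items():
    print(f"{subscale:>19}: {mean(items):.2f}")
```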
Apps in Clinical Practice/Results: As evidenced by the literature, apps can be a valuable tool for patients and practitioners alike. Patients seek apps for information, goal setting, advice, treatment modalities, and peer support (Aitken & Lyle, 2015). Practitioners can engage technology in different ways, such as exploring clinical practice guidelines, using prescribing references, and accessing a variety of health care organizations’ information repositories (Arbour et al., 2018). The ability of practitioners to engage in meaningful evaluation of apps is beneficial in several ways. Practitioners can increase their confidence in recommending apps in practice to empower patients to engage in self-management of disease and wellness. Practitioners can assist patients with app use aimed at behavior modification that can improve health outcomes. Practitioners are also in a position to evaluate the quality of information in an app; if the information quality is low, they can redirect patients toward more reliable sources. In addition, when apps are recommended by practitioners, the rate of retention of app use increases (Aitken & Lyle, 2015).
Conclusion: Because the MARS tool requires training and expertise in mHealth, Stoyanov and colleagues simplified the tool, allowing clinicians without extensive mHealth training to evaluate individual mHealth apps. This adapted tool, the uMARS, is a 20-item measure containing four objective subscales (engagement, functionality, aesthetics, and information quality) and one subjective quality subscale. After downloading the app to be reviewed, the uMARS instructions suggest that the user engage in, or “play with,” the app for at least 10 minutes to ensure a complete review and full exposure to the app and its content. The reviewer is encouraged to evaluate the content and to use the available links and navigation to judge ease of use and functionality. Once the reviewer has used the app, the 20-item tool is applied. Item scores within each of the five subscales (engagement, functionality, aesthetics, information quality, and subjective quality) are averaged to determine subscale mean scores, which are then averaged to determine an overall mean uMARS score, as sketched below.
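Assuming that reading of the averaging procedure, a minimal Python sketch of the overall uMARS calculation might look as follows; the subscale mean values are invented for a hypothetical app review.

```python
from statistics import mean

# Invented subscale mean scores for a hypothetical app review.
subscale_means = {
    "engagement": 3.8,
    "functionality": 4.5,
    "aesthetics": 3.7,
    "information quality": 3.9,
    "subjective quality": 3.5,
}

# As described above, the overall mean uMARS score is the average of
# the five subscale mean scores.
overall = mean(subscale_means.values())
print(f"Overall mean uMARS score: {overall:.2f}")  # prints 3.88
```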
This simplified version of the MARS tool does not require specialized training, making it an effective and efficient choice for practicing clinicians, who bring content knowledge and expertise to the app evaluation. They may use the uMARS tool to rate apps that their clients bring to them, or that they learn of through other means. The clinician who is ready to recommend specific apps can feel confident when a client asks for help with certain health maintenance topics.