Usability Assessment Methods for Mobile Apps for Physical Rehabilitation: Umbrella Review


Introduction

The development of mobile health (mHealth) [,] solutions has seen exponential growth in recent times, driven particularly by the global pandemic [,]. mHealth has been heralded as a tool to provide access to quality rehabilitation input for patients outside of the time they are able to spend with clinicians [] and for patients in geographically remote areas []. Furthermore, similar to the observed trend of increased health information seeking on the internet [], the democratization of access to rehabilitation could be achieved by individuals actively seeking stand-alone mHealth solutions.

However, there is also increasing awareness that mHealth solutions available to clinicians and their patients often lack quality evaluations [,]. Many mHealth solutions only have short-term (<30 days) data from small sample sizes to support their effectiveness []. Moreover, only limited standardized outcome measures are typically used [,].

Usability is one key aspect commonly included in the evaluation of mHealth solutions [,,]. It has been touted as a determiner of the success of mHealth interventions []. Usability is often distinguished from 2 related concepts: (1) utility, which captures a system’s ability to meet user needs [], and (2) user experience, which is commonly understood as a broader concept covering the experience of using an mHealth solution and may include measures of user beliefs []. However, usability may or may not be part of how user experience is captured, and many different definitions of usability appear in the literature [-].

The diversity in definitions of usability is mirrored by the diversity in usability models or frameworks. The 5 most commonly cited models of usability are those of ISO 9241-11 [] and its revision []; ISO/IEC 25010 []; Nielsen’s usability model []; and, in the context of health in particular, the People At the Centre of Mobile Application Development (PACMAD) model [,]. These models identify components of usability such as efficiency, or the resources expended to achieve a task; effectiveness, or the level of accuracy and completeness of a task achieved using a mobile solution; and satisfaction, or positive user interaction while operating the mobile solution. The key difference between the PACMAD model and the aforementioned frameworks is that, in the PACMAD model, these and other factors such as errors are seen as arising from 3 different sources: the user, the task, and the context of use. This could be argued to be of particular importance for mHealth, where users may experience limitations such as perceptual or cognitive (aging) barriers []. These limitations additionally affect task demands and therefore represent an important consideration in the design of mHealth tools.

Usability assessment has been included in several good practice guidelines for the development of mHealth solutions [-], as well as in many evaluation frameworks [,], and can be regarded as a crucial step for evaluation at different stages of the typical mHealth development cycle. To date, however, no accepted standard for the assessment of usability of mHealth solutions exists. This means that researchers and developers of mHealth face difficult decisions when designing mHealth evaluation procedures that strike a balance between responsiveness, reliability, and validity, and are unable to compare existing solutions for the purpose of innovation. Further, clinicians cannot be guided in their prescription of mHealth solutions, and there are significant barriers for consumers to engage with existing solutions.

Numerous systematic reviews have explored usability assessment approaches for various mHealth solutions in the context of physical rehabilitation. However, there is a lack of synthesis in this area of the literature. This may mean that clinicians and developers need to devote a significant amount of time and effort to analyzing and summarizing a large body of systematic reviews. An umbrella review can act as “a means for a rapid review of the evidence to address a broad and high-quality evidence base” []. Specifically, an umbrella review allows for a broader scope than individual systematic reviews that may focus on individual treatment options or individual conditions [-]. Hence, the aim of this umbrella review was to provide a “user-friendly” summary of the use of usability assessment instruments, or measurement tools, for researchers, clinicians, and consumers of mHealth irrespective of the specific area of application (eg, diabetes, tuberculosis, and sleep). Specifically, the objective was to summarize systematic reviews that investigated usability assessment instruments in mHealth interventions, including those related to physical exercise rehabilitation. It is envisaged that such a summary will first aid researchers, developers, and clinicians to gain an overview of usability assessment instruments without needing to explore primary literature. Second, the presented summary may aid the development of mHealth usability assessment standards.


Methods

Overview

The umbrella review protocol was developed based on the Cochrane Handbook for Systematic Reviews of Interventions [] and other relevant methodology sources [] and was registered with PROSPERO (CRD42022338785). StArt (State of the Art through Systematic Review) software [] was used for the first- and second-level screening of result datasets and extracting relevant information.

Inclusion Criteria

Based on the objectives of the study, the following inclusion criteria were formulated: (1) articles published between January 1, 2015, and April 27, 2023 (the date range reflected the launch of Apple ResearchKit in 2015, which accelerated mHealth development and research []); (2) containing data on human participants; (3) with the “unit of searching” [] being “systematic reviews” [,] in order to reduce the effect of cumulative bias that may arise when including nonsystematic reviews; (4) examining usability assessment instruments of mobile apps for health professionals and for health care consumers; and (5) published in the English language to enable all contributing authors to perform screening, extraction, and synthesis of the search results. No post hoc modifications were made to the inclusion criteria. Systematic reviews of usability assessment instruments of other (mobile) solutions such as wearables, sensors, virtual reality, blockchain, Internet of Things, simulated data, or solutions for health care professionals only were excluded.

Search Methods and Search Terms

The following databases were searched: PubMed, Cochrane, IEEE Xplore, Epistemonikos, Web of Science, and CINAHL Complete. The search used a combination of the terms mobile application*, mobile app, usab*, usab* criteria, usab* evaluat*, systematic review, mhealth, mobile health, and physical exercise, combined using the Boolean operators OR and AND and customized for each database in accordance with its filtering specifications. The result sets were imported into StArt []. The full search syntax for each database is presented in Table S1 in .

Data Collection and Analysis

A preliminary search of existing systematic reviews was conducted before finalizing the search terms in order to scope the extent and type of existing evidence []. The subsequent final search terms produced a result set that was more refined in focus and feasible in size. Following the removal of duplicates, 2-level screening was performed: title and abstract screening was completed by the primary author (SH), and a randomly selected subset of articles (118/1479, approximately 8%) was screened by a second author (VS; κ=0.87). Second-level (full-text) screening was performed by the primary author (SH), and StArt was used for data extraction from the final result set. A data extraction form including basic reference details, as well as information such as population of interest and interventions studied, was discussed and agreed on by 3 authors (SH, GA, NS) before data extraction (see review protocol PROSPERO CRD42022338785 for more detail).
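To make the agreement statistic concrete, the following is a minimal, illustrative sketch (not the authors' code) of how Cohen κ can be computed from 2 screeners' include or exclude decisions; the function name and decision lists are hypothetical.

```python
# Illustrative sketch: Cohen's kappa for dual title/abstract screening
# decisions, the inter-rater agreement statistic reported above.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical decisions of equal length."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in categories) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical screening decisions for a small subset of records
screener_1 = ["include", "exclude", "exclude", "include", "exclude"]
screener_2 = ["include", "exclude", "include", "include", "exclude"]
print(round(cohens_kappa(screener_1, screener_2), 2))  # ~0.62
```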

Quality assessment was completed using AMSTAR 2 (A Measurement Tool to Assess Systematic Reviews 2; Institute for Clinical Evaluative Sciences) [] by the primary author (SH) and a second author (VS) separately (κ=0.823). Any disagreement was discussed and resolved via consensus. In line with recommendations by Shea et al [], 2 authors (SH, NS) discussed and determined the AMSTAR 2 critical domains for this umbrella review. Criteria 2, 4, and 7 were retained as critical criteria, as defined by the original publication []. The original critical criteria 9, 11, 13, and 15 were classified as noncritical for the purpose of this umbrella review because they pertain to meta-analytic steps that none of the included systematic reviews performed. Instead, the following criteria were classified as critical: criterion 5, due to the variety of study designs and target user groups and/or clinical contexts included within the systematic reviews; and criterion 16, due to the context of mHealth usability, where the borders between academic enquiry and commercialization are more blurred and funding could constitute a significant source of bias and/or conflict of interest. A summary rating was produced according to the recommendations by Shea et al [].
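As an illustration only, the following sketch expresses the summary-rating rule described by Shea et al using the critical criteria classified above; the failed-criteria example is hypothetical and does not correspond to any specific review included.

```python
# Sketch of the AMSTAR 2 summary-rating rule (per Shea et al), applied with
# this umbrella review's own set of critical criteria. Input is the set of
# criteria that a given systematic review failed.
def amstar2_rating(failed_criteria, critical_criteria):
    critical_flaws = len(set(failed_criteria) & set(critical_criteria))
    noncritical = len(set(failed_criteria) - set(critical_criteria))
    if critical_flaws > 1:
        return "critically low"
    if critical_flaws == 1:
        return "low"
    if noncritical > 1:
        return "moderate"
    return "high"

# Critical criteria as classified for this umbrella review (see text above);
# the failed-criteria set is a hypothetical example.
critical = {2, 4, 5, 7, 16}
print(amstar2_rating({2, 9, 13}, critical))  # "low": one critical flaw
```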

Finally, to gauge potential skewing of the data caused by significant overlap of primary studies contained within the systematic reviews included in this umbrella review [], overlap assessment was achieved via citation matrix [,] for the systematic reviews including the System Usability Scale (SUS) as an exemplar. The SUS was chosen because it is one of the most well-known instruments [] and preliminary searches of the literature demonstrated its frequency of use and reference.


Results

The initial database search returned 1479 results, which were reduced to 1375 after removal of duplicates (see ). Title and abstract screening resulted in 27 articles being included for full-text screening. A total of 15 of the full-text articles retrieved (see Table S2 in ) were ineligible because they did not review usability assessment measures, did not include sufficient detail on usability assessment instruments (eg, included binary information only), were not literature reviews, or examined nonhealth mobile service categories (see ).

Figure 1. PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) flowchart.

A total of 12 systematic reviews examining usability assessment instruments were included. Data were extracted (see Table S3 in ) as per the registered protocol. Across the systematic reviews included, there was coverage of primary studies from the start of records to 2020. Three of the systematic reviews included examined usability assessment instruments within a specific target user group (eg, users with diabetes [] and users living with a mental health concern [,]). The remaining 9 systematic reviews [,-] focused on usability assessment instruments used across different target user populations. Usability models or frameworks referenced included ISO [] (referenced in [,,,]), Nielsen [] (referenced in []), and the framework by the Canadian Institutes of Health Research and the Mental Health Commission Canada [] (referenced in []). Three (25%) of the systematic reviews [,,] included in this umbrella review did not refer to any theoretical framework (see Table S3 in ).

The systematic reviews included identified a total of 32 usability assessment instruments (see ) and a further 66 custom-made usability assessment instruments as well as hybrid custom-made instruments (see Table S4 in ). The most commonly referenced usability assessment instrument was the SUS [], followed by the IBM Computer Usability Satisfaction Questionnaire [] and the Usefulness, Satisfaction, and Ease of Use (USE) Questionnaire [].
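Given how frequently the SUS is referenced, a brief sketch of its conventional scoring rule may be helpful: 10 items are rated 1-5, odd-numbered (positively worded) items contribute the response minus 1, even-numbered items contribute 5 minus the response, and the sum is multiplied by 2.5 to yield a 0-100 score. The code and responses below are illustrative only and are not drawn from any included review.

```python
# Minimal sketch of the conventional SUS scoring rule (Brooke, 1996):
# 10 items rated 1-5; odd items contribute (response - 1), even items
# contribute (5 - response); the sum is scaled by 2.5 to give 0-100.
def sus_score(responses):
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects 10 responses on a 1-5 scale")
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based, so i % 2 == 0 marks items 1, 3, 5, ...
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 2.5

# Hypothetical respondent
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # 85.0
```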

Table 1. Overview of usability assessment scales identified by reviews included within this umbrella review. Psychometric properties are as identified by the systematic reviews included in this umbrella review.

| Assessment scale | Systematic review(s) identifying scale | Count | Internal consistency (Cronbach α) | Reliability (intraclass correlation) | Content validity | Structural validity | Cross-cultural validity | Criterion, convergent, concurrent, discriminant validity | Responsiveness |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| App adaptation: Abbott’s scale [] | Nouri et al [] | 1 | NR | NR | NR | NR | NR | NR | NR |
| After Scenario Questionnaire [] | Inal et al [] | 1 | NR | NR | NR | NR | NR | NR | NR |
| App adaptation: Brief DISCERN [] | Nouri et al [] | 1 | NR | NR | NR | NR | NR | NR | NR |
| App adaptation: CRAAP checklist [] | Nouri et al [] | 1 | NR | NR | NR | NR | NR | NR | NR |
| Ease of Use and Usefulness Scale (EUUS) [] | Kien et al [] | 1 | NR | NR | NR | NR | NR | NR | NR |
| Enlight [] | Azad-Khaneghah et al [] | 1 | NR | NR | NR | NR | NR | NR | NR |
| Health Information Technology Usability Evaluation Scale (Health-ITUES) [] | Azad-Khaneghah et al [], Muro-Culebras et al [] | 2 | 0.85-0.92 | No | Expert panel and factor analysis | Exploratory and confirmatory factor analysis | No | Correlation with the Post-Study System Usability Questionnaire (PSSUQ) | Statistically significant difference was demonstrated with the intervention group |
| Health IT Usability Evaluation Model (Health-ITUEM) [] | Nouri et al [], Vera et al [] | 2 | NR | NR | NR | NR | NR | NR | NR |
| App adaptation: Health-Related Website Evaluation Form (HRWEF) [] | Nouri et al [] | 1 | NR | NR | NR | NR | NR | NR | NR |
| App adaptation: Health On the Net (HON) code [] | Nouri et al [] | 1 | NR | NR | NR | NR | NR | NR | NR |
| IBM Computer Usability Satisfaction Questionnaire [] | Azad-Khaneghah et al [], Georgsson [], Ng et al [], Wakefield et al [], Zapata et al [] | 5 | 0.89 | No | Expert panel | No | NR | No | No |
| ISOMETRIC [] | Azad-Khaneghah et al [] | 1 | NR | NR | NR | NR | NR | NR | NR |
| iSYScore index [] | Muro-Culebras et al [] | 1 | No | No | Expert panel | No | NR | No | No |
| App adaptation: Kim Model [] | Nouri et al [] | 1 | NR | NR | NR | NR | NR | NR | NR |
| Measurement Scales for Perceived Usefulness and Perceived Ease of Use [] | Muro-Culebras et al [] | 1 | 0.97 (usefulness), 0.91 (ease of use) | No | Focus group | Exploratory factor analysis | No | Convergent and discriminant validity | No |
| Mobile App Rating Scale (MARS) [] | Muro-Culebras et al [], Nouri et al [], Vera et al [] | 3 | 0.90 | 0.79 | Expert panel | No | No | No | No |
| Mobile App Rating Scale, user version (uMARS) [] | Muro-Culebras et al [], Nouri et al [] | 2 | 0.90 | 0.66 (1-2 mo), 0.70 (3 mo) | Expert panel and focus groups | No | No | No | No |
| NASA Task Load Index (TLX) [] | Zapata et al [] | 1 | NR | NR | NR | NR | NR | NR | NR |
| NICE guidelines tool [] | Azad-Khaneghah et al [] | 1 | NR | NR | NR | NR | NR | NR | NR |
| Perceived Useful and Ease of Use Questionnaire (PUEU) [] | Azad-Khaneghah et al [], Inal et al [] | 2 | NR | NR | NR | NR | NR | NR | NR |
| Post-Study System Usability Scale (PSSUS)/PSSUQ [] | Inal et al [], Niknejad et al [], Vera et al [] | 3 | NR | NR | NR | NR | NR | NR | NR |
| Quality Assessment tool for Evaluating Medical Apps (QAEM) [] | Azad-Khaneghah et al [] | 1 | NR | NR | NR | NR | NR | NR | NR |
| Quality of Experience (QOE) [] | Azad-Khaneghah et al [], Nouri et al [] | 2 | NR | NR | NR | NR | NR | NR | NR |
| Questionnaire for User Interaction Satisfaction 7.0 (QUIS) [] | Georgsson [], Saeed et al [] | 2 | NR | NR | NR | NR | NR | NR | NR |
| App adaptation: Silberg score [] | Azad-Khaneghah et al [], Nouri et al [] | 2 | NR | NR | NR | NR | NR | NR | NR |
| Software Usability Measurement Inventory (SUMI) [] | Azad-Khaneghah et al [] | 1 | NR | NR | NR | NR | NR | NR | NR |
| System Usability Scale (SUS) [] | Azad-Khaneghah et al [], Georgsson [], Inal et al [], Muro-Culebras et al [], Ng et al [], Niknejad et al [], Nouri et al [], Vera et al [], Wakefield et al [], Zapata et al [] | 10 | 0.911 | No | Focus group | Exploratory and confirmatory factor analysis | No | No | No |
| Telehealth Usability Questionnaire (TUQ) [] | Georgsson [], Inal et al [], Niknejad et al [] | 3 | NR | NR | NR | NR | NR | NR | NR |
| Telemedicine Satisfaction and Usefulness Questionnaire (TSUQ) [] | Wakefield et al [] | 1 | 0.96 (video visits), 0.92 (use and impact) | No | Expert panel | Exploratory factor analysis | No | Significant discriminant validity (Hispanic vs non-Hispanic) | No |
| The mHealth App Usability Questionnaire for interactive mHealth apps, patient version (MAUQ) [] | Muro-Culebras et al [] | 1 | 0.895, 0.829, 0.900 | No | Expert panel | Exploratory factor analysis | No | Correlation with PSSUQ and SUS | No |
| The mHealth App Usability Questionnaire for standalone mHealth apps, patient version (MAUQ) [] | Muro-Culebras et al [] | 1 | 0.847, 0.908, 0.717 | No | Expert panel | Exploratory factor analysis | No | Correlation with PSSUQ and SUS | No |
| Usefulness, Satisfaction, and Ease of Use (USE) Questionnaire [] | Azad-Khaneghah et al [], Inal et al [], Kien et al [], Ng et al [] | 4 | NR | NR | NR | NR | NR | NR | NR |

NR: not reported as part of the systematic reviews included in this umbrella review.

Data regarding the psychometric properties of 9 (28%) instruments [,,,-,,] were included in the systematic reviews as detailed in . Internal consistency was generally good across these instruments, content validity was provided through expert panel or focus groups [,,,,,,], and exploratory and/or confirmatory factor analyses were used in evidence of structural validity [,,,,]. Details of convergent validity were included for 3 instruments [,,] (see ). Importantly, there was no evidence of reliability, responsiveness, or cross-cultural validity assessment for the usability assessment instruments referenced most often (ie, SUS, IBM Computer Usability Satisfaction Questionnaire, and USE Questionnaire).
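For orientation, the internal consistency figures in Table 1 are Cronbach α coefficients. The following minimal sketch shows how α is derived from a respondents-by-items matrix; the function and data are illustrative only and not taken from any included review.

```python
# Illustrative sketch: Cronbach's alpha for a respondents-by-items matrix,
# the internal consistency statistic summarized in Table 1.
import numpy as np

def cronbach_alpha(items):
    """items: 2D array, rows = respondents, columns = questionnaire items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses from 5 users to a 4-item usability scale
responses = np.array([
    [4, 5, 4, 4],
    [3, 4, 3, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 2],
    [4, 4, 4, 5],
])
print(round(cronbach_alpha(responses), 2))  # ~0.95
```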

Further, 8 (67%) of the systematic reviews [,-,-,] referred to usability assessment methods other than assessment scales. These included focus groups, heuristic evaluation, think-aloud protocols, and other methods (see Table S5 in ).

Quality assessment of the systematic reviews using AMSTAR 2 revealed that 8 (67%) articles [,-,-,] exhibited at least 2 critical weaknesses (see ), 3 (25%) systematic reviews [,,] were affected by 1 critical weakness, and 1 (8%) review [] had only noncritical weaknesses. The most frequently unfulfilled assessment criteria included the sources of funding enquiry for the included studies (AMSTAR criterion 10), accounting for risk of bias when interpreting results (AMSTAR criterion 13), use of a satisfactory technique for assessing risk of bias (AMSTAR criterion 9), and inclusion of a review protocol (AMSTAR criterion 2; see Table S6 in ).

Figure 2. Overview of methodological quality of reviews according to AMSTAR 2 (A Measurement Tool to Assess Systematic Reviews 2). * denotes critical criterion as determined for this umbrella review.

Finally, visualization of citation overlap for systematic reviews including primary studies using the SUS showed minimal overlap with 4 (10%) of 41 primary studies included in 2 of the systematic reviews (see Table S7 in ). With the exception of the citation of the original publication of the SUS instrument [], all other references included in the overview were unique to one of the systematic reviews included.
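Although the overlap assessment here was performed via a citation matrix rather than a summary statistic, such a matrix can also be quantified, for example, with the corrected covered area (CCA) described by Pieper et al. The sketch below is illustrative only and uses hypothetical data rather than the SUS citation matrix in Table S7.

```python
# Illustrative sketch: quantifying primary-study overlap across systematic
# reviews from a citation matrix using the corrected covered area,
# CCA = (N - r) / (r * c - r), where rows are primary studies, columns are
# reviews, N is the number of inclusions, r the number of rows, and c the
# number of columns.
import numpy as np

def corrected_covered_area(citation_matrix):
    m = np.asarray(citation_matrix)
    r, c = m.shape          # r primary studies, c systematic reviews
    n = int(m.sum())        # total inclusions across all reviews
    return (n - r) / (r * c - r)

# 5 hypothetical primary studies (rows) across 3 reviews (columns);
# a 1 means the review includes that study.
matrix = [
    [1, 0, 0],
    [1, 1, 0],
    [0, 1, 0],
    [0, 0, 1],
    [1, 0, 1],
]
print(round(corrected_covered_area(matrix), 3))  # 0.2; low values = slight overlap
```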


Discussion

Principal Findings

The exponential growth of research evidence related to the effectiveness of mobile solutions for rehabilitation [-] and the proliferation of technological solutions that afford new modes of treatment delivery [,] underscore the critical need for high-quality mHealth usability evaluation. Usability attributes such as efficiency, learnability, and memorability [] are particularly important to consider for mHealth users who may face challenges due to neurological compromise [], age-related issues [], or limited technology experience []. This umbrella review aimed to summarize usability assessment instruments for mHealth researchers, clinicians, and consumers to guide the development, assessment, and selection of high-quality mHealth tools.

First, the review identified significant diversity in, and common use of, custom-made instruments among the usability assessment instruments employed to evaluate mHealth tools for rehabilitation. Second, there was a notable lack of theoretical grounding for the selection of usability assessments. Third, the systematic reviews included revealed a scarcity of psychometric data for widely used mHealth usability assessment instruments.

Heterogeneity of Instruments, Including Nonstandardized Instruments

Regarding the first critical point, a wide range of different instruments for the assessment of usability was evident across the systematic reviews included. This range included adaptations of preexisting usability assessment instruments for the context of mobile apps [,] as well as assessment instruments, such as the Mobile App Rating Scale (MARS) [], specifically designed for usability assessment of mHealth tools. In addition, both completely custom-made instruments and hybrids [] of preexisting instruments with custom elements were prevalent in the mHealth usability literature.

Although the use of hybrid assessment instruments and adaptations of preexisting assessment instruments may increase flexibility and thereby possibly improve the experience for respondents, the fact that most studies are limited in sample size prevents validation of hybrid and adapted instruments []. Alternative approaches to increasing flexibility and improving respondent experience while ensuring psychometric integrity are needed instead. A good example of this may be seen in the creation of a hybrid version of the SUS with the inclusion of pictorial elements, which increased respondent motivation []. Importantly, acceptable validity, consistency, and sensitivity were also evidenced, allowing future users of the hybrid measure to place greater trust in the quality of the data.

Theoretical Underpinning

Second, and similar to what has been found for individual-level studies assessing the usability of specific mHealth tools [], this review revealed that some systematic reviews examining the broader literature on usability assessment lacked connection to theoretical models of usability. This observation resonates with previous criticisms of the quality of reviews of health-related mobile apps [] as well as research exploring technology adoption in fields beyond mHealth []. The latter exposed a reliance on a wide array of theoretical models of technology adoption in the literature and, in some cases, on several models within one review. To address this, it has been suggested that generic models for different service categories (eg, information and transaction) be developed []. A theoretically grounded, generic guide for mHealth usability assessment could similarly promote broader adoption and enhance comparison of usability across studies and use cases.

Psychometric Properties and Psychometric Testing

Third, systematic reviews included in our overview also reported significant limitations regarding the psychometric properties of preexisting instruments. For example, the MARS tool, which has been put forward as an instrument for standardized use in mHealth usability assessment [], lacks structural validity. Moreover, other constructs such as internal consistency and criterion validity have been documented as significant areas of future work for measuring the implementation of interventions [], with usability assessment playing a significant role.

Although these findings are consistent with previous research, this umbrella review did not specifically search for psychometric evaluations of usability assessment instruments; instead, it relied on summaries of psychometric evaluations presented as part of the included systematic reviews. As a result, it is likely that psychometric evaluations of other instruments are available. For example, a psychometric evaluation of the popular USE Questionnaire [] is available and, consistent with our observation, has shown the instrument to be affected by a lack of reliability and validity []. Furthermore, outside of the academic literature, there is an even greater proportion of mHealth solutions on the market that likely have not undergone empirical evaluation of usability.

Although some acceptable psychometric information was referenced for the SUS [], both the IBM Usability Satisfaction Questionnaire and the USE Questionnaire appear to lack reliability assessment. Reliability, or freedom from measurement error [], may be regarded as crucial with regard to any metrics that are gathered after, rather than during, a user’s interaction with an application. The inability to separate true change in users’ estimates of the usability of mHealth tools from random variation, or measurement error, originating from recall bias [,,], for example, means that mHealth tool iterations [] cannot be evaluated appropriately.

Moreover, the widespread use of custom-made and hybrid assessment instruments leads to the loss of the original instrument’s integrity and compromises its already-documented psychometric strengths []. Consequently, establishing the validity of results from individual usability investigations becomes challenging, and comparison across studies is difficult. Hence, there is an urgent need to assess the accuracy and appropriateness [] of individual usability assessment instruments to capitalize on the promise of mHealth tools in rehabilitation [,].

Another important psychometric aspect of usability assessment instruments that the systematic reviews included in this umbrella review highlight as missing from the published literature is responsiveness. mHealth development usually involves iterative design and testing cycles [,] with associated formative and summative usability evaluation []. Across the life of mHealth development, iterative cycles are likely to span different stages of development and be undertaken in different clinical contexts [,]. Integrating usability assessment into this process requires instruments that are generic enough to capture user responses to a wide variety of mHealth strategies but also fine-grained enough to possess sufficient responsiveness [].

Finally, with regard to the argument of lacking psychometric assessment, none of the preexisting mHealth usability assessment instruments referenced as part of the literature included in this umbrella review appear to have been informed by a breadth of cultural perspectives or to have undergone cross-cultural validity testing. Given the global potential of mHealth to address inequities in access to and outcomes from rehabilitation [,], it is particularly important to establish the cross-cultural validity of the usability assessment instruments employed in mHealth development. In addition, with the pervasiveness of technology, there is a certain element of unpredictability in the context in which mHealth tools will be trialed and used “in the wild” [,]. For that reason, an alternative argument could be made for innovative, culturally responsive methodology for mHealth tool design, including usability testing []. A key difference in such attempts is user participation at multiple stages of development and responsiveness to expanding the stages of development as guided by stakeholders. This process likely includes constant negotiation and may be resource heavy but is arguably needed if the aim is to create mHealth solutions that improve outcomes for Indigenous communities, for example [,].

Considering the identified issues, including lack of theoretical grounding, common use of custom-made assessment instruments, and the scarcity of psychometric data for widely used mHealth usability assessment instruments, multimethod usability assessment appears paramount. This is consistent with recommendations made by a number of research groups [,,,] and reinforces the argument often advanced in favor of Ecological Momentary Assessment approaches, which are recognized for their advantage over retrospective assessment []. It is therefore proposed that standards be developed that specify the time points in the mHealth life cycle at which usability assessment is completed, with an emphasis on what methods to use. Moreover, these standards should mandate that individual assessment instruments are grounded in a theoretical framework and possess a minimum threshold for psychometric properties [,].

Recommendations

The establishment of a universal usability scoring system or algorithm would further facilitate the integration of these assessments into an overall framework []. It has been observed that, at present, fewer than half of existing evaluation frameworks include such a scoring system, but that such systems could support funding decisions [] and advance the vision of prescribable mHealth apps []. Although technological advancement often outpaces academic enquiry, necessitating new approaches to mHealth evaluation frameworks [], usability factors are enduring [], and investing resources into establishing standards will therefore be valuable.

Limitations

In the context of an area of practice where the lines between commercial and academic work are blurred and usability assessment constitutes common practice in the global commercial environment [], this umbrella review is limited to including only English-language systematic reviews published within the academic literature indexed in the databases searched. Furthermore, the quality of the included systematic reviews was found to be limited, and the fit of the AMSTAR 2 tool with methodological papers is not perfect. However, AMSTAR 2 could be argued to be more detailed than instruments developed specifically for umbrella reviews [], and, in line with the AMSTAR 2 recommendations [], the authors modified the list of critical criteria to reflect the specific aim of the overview. Finally, with regard to the review’s methodology, 2 limitations are of note. First, although the search syntax for this umbrella review included the keyword “physical exercise,” for pragmatic reasons, no validation step was included to confirm that all mHealth tools examined as part of the primary studies within the systematic reviews included a physical exercise component. Regardless, the observations presented here are valid for mHealth tools for rehabilitation overall and provide valuable guidance to developers, researchers, and clinicians. Second, for practical reasons, study selection could only be performed by the primary author (SH), with a subset of articles being screened by a second author (VS). However, agreement on study selection was high (>80%), supporting the quality of the review.

Conclusions

There is considerable variety in approaches to and instruments for the assessment of usability in mHealth for rehabilitation, many of which lack theoretical foundation. Clinicians are therefore advised to critically evaluate mHealth literature and solutions, paying particular attention to the population in which usability testing was performed and the specific usability assessment instruments employed. Future research efforts should focus on producing high-quality systematic reviews and psychometric evaluations of usability assessment instruments. A collaborative effort between researchers, designers, and developers is essential to establish mHealth tool development standards. These standards should emphasize the incorporation of usability assessment instruments underpinned by a robust theoretical base. This umbrella review represents a valuable reference tool in this endeavor. Inclusion of multimethod usability assessment within the wider mHealth development cycle could also be part of these standards, which will ensure that we can capitalize on the widely heralded promise of mHealth to promote access to and outcomes from rehabilitation.

The authors thank the wider team of researchers and clinicians at the AUT Research Innovation Centre for workshop and input, and Exsurgo for valuable conversations on usability from a commercial perspective.

None declared.

Edited by L Buis; submitted 29.05.23; peer-reviewed by T Davergne, K Harrington, S Hoppe-Ludwig, S Nataletti; comments to author 11.02.24; revised version received 04.05.24; accepted 30.07.24; published 04.10.24.

©Sylvia Hach, Gemma Alder, Verna Stavric, Denise Taylor, Nada Signal. Originally published in JMIR mHealth and uHealth (https://mhealth.jmir.org), 04.10.2024.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR mHealth and uHealth, is properly cited. The complete bibliographic information, a link to the original publication on https://mhealth.jmir.org/, as well as this copyright and license information must be included.
