Participant Engagement in Microrandomized Trials of mHealth Interventions: Scoping Review


Introduction

Background

In the past decade, digital solutions that leverage mobile technologies to improve health and well-being have become increasingly popular and have emerged as promising adjuncts to traditional health care provision []. These so-called mobile health (mHealth) interventions generally involve the use of mobile technologies such as mobile apps, SMS text messaging, and wearable devices to improve patient health outcomes by delivering health-related intervention content. Mounting evidence suggests that mHealth interventions are largely effective for treating chronic health conditions [,] and for preventing unhealthy behaviors []. Effectiveness aside, it is not difficult to see why mHealth interventions are so popular: they are highly scalable and cost-efficient []. High rates of mobile ownership worldwide also signal the potential for mHealth interventions to reach a diverse audience, including the underserved; however, we must acknowledge that there are barriers to access (such as the lack of internet access) that prevent mHealth interventions from being truly equitable [].

Recently, more sophisticated mHealth interventions have been proposed to take advantage of the technological advances in mobile technology. These novel interventions (such as just-in-time adaptive interventions) tend to be multicomponent, that is, they tend to involve the manipulation of ≥2 components hypothesized to have a treatment effect. They also tend to be adaptive, in the sense that components of the intervention (eg, its content and timing of delivery) can change in response to some input data provided by the user (tailoring data collected from surveys or sensors). To make this concrete, let us consider a hypothetical mHealth intervention designed to reduce the severity of depression symptoms by sending daily motivational messages via SMS text messaging. The intervention is said to be multicomponent if both message content and timing of SMS delivery are thought to be active ingredients that can influence depression symptom severity. Such an intervention could be made adaptive if daily message content is tailored to the participant’s mood the night before such that if a given participant had high negative mood the night before, a more strongly worded motivational message would be sent the next day. Unfortunately, conventional randomized controlled trials (RCTs) cannot be used to develop and optimize these interventions because they do not allow researchers to separate the treatment effect of individual treatment components from the overall treatment effect. In addition, RCTs do not allow researchers to investigate time-varying effects, which is of interest when the goal is to identify the optimal time to administer an intervention component []. Therefore, if the RCT design is used to study the aforementioned hypothetical mHealth intervention, we will only be able to estimate the overall treatment effect of sending motivational messages on depression symptom severity and not the specific treatment effect of message content and timing of SMS delivery on the severity of depressive symptoms.
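For illustration, the following minimal sketch expresses the hypothetical tailoring rule described above; the 0 to 10 mood scale, the threshold of 7, and all names are assumptions made for the example, not taken from any cited intervention.

```python
# Minimal sketch of the hypothetical tailoring rule described above.
# The 0-10 mood scale, the threshold of 7, and all names are
# illustrative assumptions, not taken from any cited intervention.

def choose_daily_message(negative_mood_last_night: float) -> str:
    """Tailor today's motivational SMS to last night's negative mood."""
    if negative_mood_last_night >= 7:  # high negative mood last night
        return "strongly worded motivational message"
    return "standard motivational message"

print(choose_daily_message(8.5))  # strongly worded motivational message
```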

To address these limitations of the RCT design, several cutting-edge trial designs have been proposed in recent years. The microrandomized trial (MRT) design in particular has gained considerable traction as a way to optimize multicomponent and adaptive mHealth interventions (including but not limited to just-in-time adaptive interventions) [-]. Essentially, the MRT design involves the repeated random assignment of participants to different intervention options of a single or multiple intervention components; therefore, an MRT of our hypothetical multicomponent motivational SMS text messaging intervention would entail repeatedly randomizing participants to receive different types of motivational messages at different times daily. This repeated random assignment then facilitates the estimation of the time-varying causal effects of each specific treatment component [], that is, we can estimate the treatment effect of message content and timing of SMS text message delivery on the severity of depressive symptoms. Therefore, unlike RCTs, MRTs allow researchers to investigate the effectiveness of specific components of mHealth interventions, which could be informative for theory, future research, and intervention optimization. Notably, RCTs and MRTs are not mutually exclusive. One additional benefit of the MRT design is that it can be easily embedded within the treatment arm of a conventional RCT; therefore, the overall treatment effect and the effect of specific intervention components can be tested simultaneously.
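To make the repeated random assignment concrete, the sketch below simulates micro-randomization for the hypothetical SMS intervention; the option sets, the uniform randomization probabilities, and all names are illustrative assumptions for the example.

```python
import random

# Minimal sketch of micro-randomization for the hypothetical SMS
# intervention. Option sets, uniform randomization probabilities,
# and all names are illustrative assumptions.

MESSAGE_OPTIONS = ["standard", "strongly_worded", "no_message"]
DELIVERY_TIMES = ["morning", "afternoon", "evening"]

def microrandomize(participants, n_days, seed=0):
    """At every decision point (here, daily), independently re-randomize
    each participant to a message content and delivery time, yielding
    the assignment data that support estimation of time-varying
    component effects."""
    rng = random.Random(seed)
    assignments = []
    for day in range(1, n_days + 1):
        for pid in participants:
            assignments.append({
                "participant": pid,
                "day": day,
                "content": rng.choice(MESSAGE_OPTIONS),
                "timing": rng.choice(DELIVERY_TIMES),
            })
    return assignments

# Example: 3 participants micro-randomized daily over a 4-week trial.
trial = microrandomize(["p01", "p02", "p03"], n_days=28)
print(len(trial))  # 84 randomized decision points
```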

Regardless of the trial design used, the measurement of participant engagement is integral to understanding the feasibility of mHealth interventions. This is because engagement with the constituent digital or nondigital intervention stimuli and tasks of an mHealth intervention is necessary for the individual to experience the intended distal health outcomes of the intervention [,]. The measurement of engagement, however, is not straightforward. Engagement, like many other psychological constructs, is abstract and fuzzy and cannot be measured directly (unlike, for example, height). To measure engagement, researchers must first operationalize engagement, that is, define it in measurable terms []. To unpack how exactly engagement with mHealth interventions can be operationalized, it is instructive to consider how engagement can be measured, which kinds of engagement can be measured, and what levels of engagement can be measured.

Measures of Engagement

According to Yardley et al [] and then Short et al [], there are 7 methods of engagement measurement that researchers can use to obtain a sense of participant engagement in their digital interventions: self-report questionnaires, ecological momentary assessments (EMAs), qualitative methods, system usage data, sensor data, social media data, and psychophysiological measures. The measurement of engagement via self-report questionnaires and EMAs involves directly asking participants to report (via single items or questionnaires) their subjective experience of using the digital intervention or their use of the intervention. Qualitative methods of engagement, by contrast, involve the inference of engagement from qualitative sources (such as written responses and semistructured interviews). Measuring engagement via system usage data involves the quantification of how the digital intervention is used through metrics including, but not limited to, the number of log-ins, time spent on the intervention, and number of modules viewed. Engagement can also be measured by analyzing passively collected social media and sensor data if social media and sensors (eg, pedometers and heart rate sensors) are a feature of the intervention. Finally, psychophysiological measures of engagement involve the use of measures such as electroencephalography, eye tracking, or functional magnetic resonance imaging to infer engagement from neural and physiological activity.

Facets of Engagement

Engagement is thought to be a multifaceted construct composed of 3 distinct facets—physical, affective, and cognitive [,]. The physical facet of engagement refers to the “actual performance of an activity or task” []. The affective facet by contrast is thought to capture “a wide range of positive affective reactions to a task or activity, from feeling pride, enthusiasm, and satisfaction, to affective states that may underlie more enduring experiences of attachment, identification, and commitment” []. Finally, the cognitive facet of engagement is thought to refer to “selective attention and processing of information related to a task or activity” []. These facets represent distinct kinds of engagement that can be measured in mHealth interventions.

Levels of Engagement

When discussing the measurement of engagement in digital interventions, it is crucial to ask the question, “engagement with what?” []. This is because engagement measures can either be measures of engagement with the features and the active ingredients of the intervention or engagement with the health behavior of interest. Formally, Cole-Lewis et al [] termed engagement with the mHealth intervention as “Little e” and engagement with the health behavior of interest as “Big E”; elsewhere, the terms microengagement and macroengagement are used instead []. In essence, Little e and Big E represent 2 distinct levels of engagement, where the 7 methods of engagement outlined in the Measures of Engagement section can be applied to measure participant engagement in the mHealth intervention context.

This Study

Given the importance of engagement to mHealth interventions, researchers have endeavored to understand how engagement has been conceptualized and operationalized in studies evaluating mHealth interventions. For instance, Pham et al [] recently reviewed how engagement has been defined and measured in mHealth apps for chronic conditions. Perski et al [], by contrast, reviewed how engagement was conceptualized in digital behavior change interventions (their review was not limited to mHealth interventions; it included other digital interventions). Other recent reviews evaluated the measurement of engagement in mHealth interventions designed for specific health conditions [,]. However, none of these reviews examined mHealth interventions evaluated by MRTs, perhaps owing to the relative infancy of the trial design. Thus, not much is known about the state of participant engagement measurement in MRTs of mHealth interventions. Furthermore, it is not yet known what kinds of factors have been studied as determinants of engagement in these MRTs.

Therefore, we conducted a scoping review to map this relatively new research area. We chose to conduct a scoping review as we expected that only a handful of mHealth intervention MRTs have been conducted to date—too few to be meaningfully synthesized with a systematic review. This scoping review aimed to address 3 review questions:

1. What proportion of existing (or planned) MRTs of mHealth interventions to date have assessed (or have planned to assess) engagement?
2. How has engagement been operationalized in existing (or planned) MRTs of mHealth interventions that have assessed (or have planned to assess) engagement?
3. In existing (or planned) MRTs of mHealth interventions that have assessed (or have planned to assess) engagement, what kind of factors have been studied as determinants of engagement?
Methods

Protocol and Registration

The protocol for this scoping review was developed using the Joanna Briggs Institute Manual for Evidence Synthesis [] and was designed to ensure adherence to the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) guidelines []. The protocol and its appendices were prospectively registered with the Open Science Framework (OSF) on June 30, 2022 [].

Eligibility Criteria

We prioritized the inclusion of papers published in peer-reviewed journals. We included preprints, trial protocols, and dissertations (this was mistakenly left out of the “Types of Sources” section of our protocol []) only if no corresponding peer-reviewed journal articles were available. Conference abstracts were excluded from this scoping review.

All papers fulfilling these criteria to date were considered for inclusion if they were written in English and if they reported MRTs of mHealth interventions. We also included any secondary analyses of mHealth intervention engagement data collected from an MRT if the primary analysis (if available) did not report the assessment of engagement in detail. We defined mHealth interventions as any intervention designed to improve health outcomes through (though not limited to) the modification of health behavior (such as physical activity or treatment adherence), the improvement of patient knowledge, health monitoring, and the reduction of psychological distress via mobile technology such as SMS text messaging; mobile phone apps; or devices (including but not limited to smartwatches, wearables, and sensors) [].

As the review’s objectives concerned the assessment of engagement in MRTs of mHealth interventions, we included all studies in which the authors explicitly attempted or claimed to quantitatively or qualitatively measure participation in or use of mHealth interventions, whether directly (by measuring participation in or performance of mHealth intervention activities or components) or indirectly (using measurements derived from non–intervention-related activities or components as a proxy), regardless of how they actually defined and measured engagement (eg, even if they used alternative terms such as adherence).

Information Sources and Search Strategy

We conducted a broad search for all published MRTs of mHealth interventions to date (the search was initially conducted on July 13, 2022, and again on September 28, 2022) by searching the following 5 bibliographic databases: MEDLINE (via PubMed), Embase, PsycINFO, CINAHL, and Cochrane Library. The search strategy was originally developed for MEDLINE, and we consulted an academic librarian from the National University of Singapore to ensure that the search strategy was comprehensive and sound. This search strategy was then translated for the 4 other databases (only syntax was changed to accommodate differences in search engines; keywords remained the same). Although only 1 broad search was eventually performed, it must be noted that we registered 2 separate searches in our protocol—1 for all published MRTs of mHealth interventions to date and 1 fine-grained search for MRTs of mHealth interventions that have assessed (or have planned to assess) engagement. During our search process, we realized that the latter search was redundant as it was nested within the former (because we used the Boolean operator AND between the mHealth intervention search terms and the engagement-related search terms). Therefore, we condensed the 2 planned searches into 1 by using the Boolean operator OR instead, such that our database searches indexed any MRTs that mentioned mHealth interventions or engagement-related terms. The comprehensive search strategies for all 5 databases (and their respective previous iterations) can be found on OSF [].
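The restructuring of the searches can be illustrated schematically as follows; the term lists are placeholders ("..."), and the exact registered strings for each database are those available on OSF [].

```python
# Schematic Boolean structure only; term lists are placeholders ("...")
# and the actual database-specific strings are in the registered
# strategies on OSF.
mrt = '"microrandomised" OR "microrandomized" OR "micro-randomised" OR "micro-randomized"'
mhealth = '"mHealth" OR "mobile health" OR ...'
engagement = '"engagement" OR "adherence" OR ...'

# The 2 originally planned searches; the second is nested within the first:
search_broad = f"({mrt}) AND ({mhealth})"
search_fine = f"({mrt}) AND ({mhealth}) AND ({engagement})"

# The single condensed search that replaced them:
search_condensed = f"({mrt}) AND (({mhealth}) OR ({engagement}))"
```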

To search for gray literature and unpublished studies, we searched the reference lists of included studies for any additional sources not indexed by our database search. We also posted an open call for unpublished MRTs of mHealth interventions on Twitter and contacted known experts of the MRT design to request unpublished and file-drawered studies. Finally, we performed a search (similarly, this search was initially conducted on July 13, 2022, and again on September 28, 2022) for MRTs of mHealth interventions on 2 preprint servers (PsyArXiv and medRxiv; we added this search during our search process to ensure the comprehensiveness of our gray literature search) and on 2 clinical trial registries, ClinicalTrials.gov (as detailed in our protocol) and the International Clinical Trials Registry Platform (this was added during the search process as well). The following search terms were used: “microrandomised,” “microrandomized,” “micro-randomised,” and “micro-randomized.”

Selection of Sources of Evidence

The results of the searches described in the previous section were imported into EndNote (version 20; Clarivate; we did not use Zotero as planned because of technical difficulties) for source selection and screening. The titles and abstracts of all potential evidence sources were first screened for eligibility. Eligible sources were then subjected to a full-text screening. Before the 2 screening stages, both authors discussed a subset of the search results (5 titles and abstracts and 4 full-text articles) to calibrate the selection of evidence sources. UL performed the screening using the eligibility criteria, and BC verified the screening at both stages. Any disagreements were resolved by consensus.

Data Charting Process and Data Items

As described in our protocol [], we developed an initial data extraction form (a Microsoft Excel [Microsoft Corporation] spreadsheet) to chart the data from eligible evidence sources to obtain the information necessary to answer our review questions. Both authors (UL and BC) piloted this initial data extraction form with 4 included articles to calibrate the charting process and to ensure that relevant data items were captured by the form. This form was continuously updated during the charting process through the discussion of the extracted results. UL performed data charting, and BC verified the charted data for all eligible evidence sources. Any disagreements were resolved by consensus.

The initial data collection form was designed to abstract the following information from each paper: whether the paper described a primary or secondary analysis of MRT data, type of paper, sample size of the MRT, sample characteristics, purpose of the study, type of mHealth intervention assessed, mode of delivery for the mHealth intervention, if engagement was or will be assessed, how engagement was operationalized (if assessed), if determinants of engagement were or will be assessed, and (if any) what determinants of engagement were or will be assessed; for comprehensiveness, we also charted any moderating variables and control variables (covariates) assessed.

After piloting the form and during the charting process, we included additional data items to capture the following information: primary and secondary (if any) outcomes of the study, randomization design of the MRT, frequency of microrandomization, and the overall duration of the MRT. The final version of the data extraction form is available on OSF [].

Synthesis of Results

To quantify the proportion of existing and planned MRTs of mHealth interventions to date that have assessed (or have planned to assess) engagement, we tabulated the number of evidence sources charted to have assessed or planned to assess engagement. The included evidence sources were grouped by their purpose and presented in a tabular format. The mHealth interventions of each included evidence source were categorized based on their target. We used the following categories: mental health promotion, smoking cessation, physical activity promotion, sleep improvement, dietary lapse prevention or weight management behavior promotion, gambling reduction, and alcohol use reduction.

To understand how engagement has been operationalized in MRTs of mHealth interventions, we sought to determine how included evidence sources measured engagement, which kinds of engagement they measured, and what levels of engagement they measured. To determine how engagement has been measured, we classified explicit measures of engagement from each included source according to the methods of engagement measurement outlined by Short et al [] described in the Introduction section. We combined the self-report questionnaires and EMA categories for parsimony, as they are largely similar methods of measuring engagement. To determine which kinds of engagement have been measured, we classified explicit measures of engagement by the facets (physical, affective, or cognitive) of engagement they appear to measure []. Finally, to determine what levels of engagement have been measured, we classified the explicit measures of engagement from each included source as Little e or Big E measures [].

To identify the factors that have been studied as determinants of engagement in MRTs of mHealth interventions, we extracted the variables of interest, moderators, and covariates from each model (with a measure of engagement as the dependent variable) tested in each included source. We then organized these variables into the following categories: notification related (eg, type of prompt sent), time related (eg, days since the start of the intervention or day of the week), psychological, societal, health behavior related (eg, alcohol use), contextual (eg, location data), physiological (eg, heart rate), demographic, anthropometric (eg, weight change), or task related (eg, intervention-related activities).


ResultsSelection of Sources of Evidence

A total of 165 evidence sources were retrieved by our database search. After removing duplicates, 91 evidence sources were retained for further screening. During the title and abstract screening, 41 sources were excluded. Of the remaining 50 evidence sources, 28 were excluded at the full-text screening (Figure 1).

Notably, 17 of these sources excluded at full-text screening were trial registrations (a total of 19 trial registrations were retrieved by our database search of the Cochrane Library). A total of 15 (88%) of these 17 sources had no published protocol, journal article, or preprint; we performed a manual Google search of their respective trial identification numbers to confirm this. In total, 2 (12%) of these 17 sources were duplicate trial registrations, that is, a corresponding protocol, journal article, or preprint for each registration was already indexed by our database search. Therefore, only 22 evidence sources identified by our database search were considered eligible for this scoping review. No additional studies were identified and included from our planned searches of gray literature and unpublished studies.

Figure 1. Evidence source selection flow diagram.

Characteristics of Sources of Evidence

All charted data described in the preceding section are available on OSF [] and [-]. We present a subset of the charted data that are pertinent to our review questions.

Table 1 details the characteristics of each included evidence source. Of the 22 included sources, 12 (54%) were published journal articles, 8 (36%) were trial protocols, 1 (5%) was a preprint, and 1 (5%) was a dissertation. Only 1 evidence source was a secondary analysis of MRT data []. All included sources were published between 2018 and 2022. More than half of the included sources (14/22, 64%) were designed to evaluate the effect of intervention components. Physical activity promotion was the most common target of the mHealth interventions (8/22, 36%). Interventions were largely delivered via smartphone apps. The median sample size of the included MRTs was 110.5.

Table 1. Characteristics of included evidence sources.

| Source | Intervention type | Mode of delivery | Engagement assessed? |
| --- | --- | --- | --- |
| Evaluate effect of intervention components | | | |
| Aguilera et al [], 2021 | Mental health promotion | SMS | Yes |
| Battalio et al [], 2021 | Smoking cessation | App | Yes |
| Figueroa et al [], 2022 | Physical activity promotion | SMS, app | No |
| Goldstein et al [], 2021 | Dietary lapse prevention or weight management behavior promotion | App | Yes |
| Klasnja et al [], 2021 | Physical activity promotion | SMS | Yes |
| Klasnja et al [], 2019 | Physical activity promotion | App | Yes |
| Kramer et al [], 2020 | Physical activity promotion | App | Yes |
| Latham [], 2021a | Sleep improvement | App | Yes |
| Jeganathan et al [], 2022 | Physical activity promotion | SMSb | Yes |
| NeCamp et al [], 2020 | Physical activity promotion, mental health promotion, and sleep improvement | App | No |
| Spruijt-Metz et al [], 2022 | Physical activity promotion | App | Yes |
| Wang et al [], 2022 | Physical activity promotion and sleep improvement | App | Yes |
| Dowling et al [], 2022 | Gambling reduction | App | Yes |
| Rodda et al [], 2022 | Gambling reduction | App | Yes |
| Evaluate strategies to improve engagement | | | |
| Bell et al [], 2020 | Alcohol use reduction | App | Yes |
| Bidargaddi et al [], 2018 | Mental health promotion | App | Yes |
| Nahum-Shani et al [], 2021 | Smoking cessation | App | Yes |
| Nordby et al [], 2022 | Mental health promotion | SMS | Yes |
| Evaluate feasibility and acceptability of intervention | | | |
| Militello et al [], 2022 | Mental health promotion | App | Yes |
| Yang et al [], 2022 | Smoking cessation | App | Yes |
| Describing engagement | | | |
| Hoel et al [], 2022 | Mental health promotion | App | Yes |
| Valle et al [], 2020 | Dietary lapse prevention or weight management behavior promotion | App | Yes |

aThis study was also designed to evaluate the feasibility and acceptability of its mobile health intervention.

bSMS text messages were delivered as smartphone and smartwatch notifications.

Synthesis of Results

Operationalization of Engagement

Overview

Of the 22 included sources, 20 (91%) explicitly included at least 1 measure of engagement; 2 (9%) studies did not claim to measure engagement at all [,]; NeCamp et al [] did not do so because of technical limitations. Though we did not chart the different terms used to refer to participant engagement, we noticed during our full-text screening that some studies did indeed use alternative terms in place of the term “engagement,” such as adherence [] and investment [].

Measures of Engagement

Table 2 summarizes the measures of engagement used in each study. Across all included studies, system usage data were by far the most frequently used measure of engagement: 16 (80%) of the 20 studies that explicitly measured engagement included at least 1 measure of this category. Generally, researchers used 2 types of system usage data: (1) responsiveness to self-reports, logs, or EMAs [,,,,,,,-,,] and (2) access to or use of the intervention [,,,,,-,].
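As an illustration of these 2 types of system usage data, the following minimal sketch (with hypothetical log formats and field names that are assumptions for the example, not drawn from any included trial) computes an EMA response rate and a count of days on which the intervention app was opened.

```python
from datetime import date

# Minimal sketch of the 2 common types of system usage metrics
# described above, computed from hypothetical event logs. Log formats
# and field names are illustrative assumptions.

ema_log = [  # one record per prompted EMA
    {"day": date(2022, 7, 1), "responded": True},
    {"day": date(2022, 7, 2), "responded": False},
    {"day": date(2022, 7, 3), "responded": True},
]
app_opens = [date(2022, 7, 1), date(2022, 7, 1), date(2022, 7, 3)]

# (1) Responsiveness: proportion of prompted EMAs that were answered.
ema_response_rate = sum(r["responded"] for r in ema_log) / len(ema_log)

# (2) Access or use: number of distinct days the intervention was opened.
days_accessed = len(set(app_opens))

print(f"EMA response rate: {ema_response_rate:.0%}; days accessed: {days_accessed}")
```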

Table 2. Measures of engagement used in microrandomized trials of mobile health (mHealth) interventions.

| Source | SRa or EMAb | SUc | Sensor data | Qualitative methods | SMd | PPe |
| --- | --- | --- | --- | --- | --- | --- |
| Evaluate effect of intervention components | | | | | | |
| Aguilera et al [], 2021 | | ✓ | | | | |
| Battalio et al [], 2021 | | ✓ | ✓ | | | |
| Goldstein et al [], 2021 | | ✓ | | | | |
| Klasnja et al [], 2021 | | ✓ | ✓ | | | |
| Klasnja et al [], 2019 | | | ✓ | | | |
| Kramer et al [], 2020 | | ✓ | | | | |
| Latham [], 2021f | ✓ | ✓ | | | | |
| Jeganathan et al [], 2022 | | | ✓ | | | |
| Spruijt-Metz et al [], 2022 | | ✓ | ✓ | | | |
| Wang et al [], 2022 | | | ✓ | | | |
| Dowling et al [], 2022 | | ✓ | | | | |
| Rodda et al [], 2022 | | ✓ | | | | |
| Evaluate strategies to improve engagement | | | | | | |
| Bell et al [], 2020 | | ✓ | | | | |
| Bidargaddi et al [], 2018 | | ✓ | | | | |
| Nahum-Shani et al [], 2021 | ✓ | | | | | |
| Nordby et al [], 2022 | ✓ | ✓ | | | | |
| Evaluate feasibility and acceptability of intervention | | | | | | |
| Militello et al [], 2022 | ✓ | ✓ | | | | |
| Yang et al [], 2022 | | ✓ | ✓ | | | |
| Describing engagement | | | | | | |
| Hoel et al [], 2022 | | ✓ | | ✓ | | |
| Valle et al [], 2020 | | ✓ | | | | |
aSR: self-report data.

bEMA: ecological momentary assessment.

cSU: system usage data.

dSM: social media data.

ePP: psychophysiological data.

fThis study was also designed to evaluate the feasibility and acceptability of its mHealth intervention.

Sensor data were the second most common measure of engagement. Overall, 35% (7/20) of the studies that explicitly measured engagement included at least 1 measure of this category [,,,,,,]. Wang et al [], for example, measured the proportion of days in a week that participants wore the study’s Fitbit smartwatch to track their step counts and sleep duration.

Engagement was measured via self-reports or EMAs in 20% (4/20) of the studies that explicitly measured engagement [,-]. Latham [] evaluated a sleep intervention designed to improve the regularity of wake times in college students via prompts. One measure of engagement in this study was participants’ self-reported adherence to the sleep-related suggestions included in the prompt. Nahum-Shani et al [] proposed to study how prompts to engage in self-regulatory strategies increased engagement in self-regulatory activities; researchers planned to measure engagement as self-reported engagement in self-regulatory activities during the hour after receiving a prompt. In their evaluation of a web-based intervention delivered via SMS text messaging, Nordby et al [] measured engagement as the self-reported frequency of practicing the coping strategies taught in the web-based intervention. Militello et al [] assessed the feasibility and acceptability of intervention prompts to encourage engagement in mindfulness activities guided by a mindfulness mobile app. Here, engagement was measured as self-reported performance of a mindfulness activity or exercise in the 24 hours after receiving an intervention prompt.

Only 1 study measured engagement with qualitative methods. In this study, researchers sought to describe engagement with an Acceptance and Commitment Therapy (ACT)–based mobile app in a clinical and a nonclinical sample []. The researchers inferred participant engagement by assessing whether participant responses reflected an understanding of the ACT intervention content. The following 3 indicators were used: the identification of the function of behavior, process alignment (whether the content of a given participant’s response is congruent with the core ACT process underlying the intervention prompt received), and the qualitative content of responses.

Only 8 (40%) out of the 20 studies that explicitly measured engagement used >1 method to measure engagement. Interestingly, no study used >2 methods. No studies measured engagement with social media data or psychophysiological measures.

Facets of Engagement

Table 3 summarizes the facets of engagement measured by each included study. The physical facet of engagement was the most frequently measured facet; all 20 studies that explicitly measured engagement included at least 1 measure of this facet [-,-,-]. Examples of how this facet of engagement was measured in each included study are available [-,-,-].

Only 1 study included a measure of the affective facet of engagement []. Recall that the affective facet of engagement “captures a wide range of positive affective reactions to a task or activity,” including the “affective states that may underlie more enduring experiences of attachment, identification, and commitment” []. By asking participants how likely they were to complete the intervention (ie, their commitment to the intervention), it could be argued that Latham [] measured this facet of engagement.

Similarly, only 1 study assessed the cognitive facet of engagement; recall that this involves the “selective attention and processing of information related to a task or activity” []. This processing of information related to a task was comprehensively measured by Hoel et al [] using the qualitative measures described in the Measures of Engagement section.

Table 3. Facets of engagement measured in microrandomized trials of mobile health (mHealth) interventions.

| Source | Physical | Affective | Cognitive |
| --- | --- | --- | --- |
| Evaluate effect of intervention components | | | |
| Aguilera et al [], 2021 | ✓ | | |
| Battalio et al [], 2021 | ✓ | | |
| Goldstein et al [], 2021 | ✓ | | |
| Klasnja et al [], 2021 | ✓ | | |
| Klasnja et al [], 2019 | ✓ | | |
| Kramer et al [], 2020 | ✓ | | |
| Latham [], 2021a | ✓ | ✓ | |
| Jeganathan et al [], 2022 | ✓ | | |
| Spruijt-Metz et al [], 2022 | ✓ | | |
| Wang et al [], 2022 | ✓ | | |
| Dowling et al [], 2022 | ✓ | | |
| Rodda et al [], 2022 | ✓ | | |
| Evaluate strategies to improve engagement | | | |
| Bell et al [], 2020 | ✓ | | |
| Bidargaddi et al [], 2018 | ✓ | | |
| Nahum-Shani et al [], 2021 | ✓ | | |
| Nordby et al [], 2022 | ✓ | | |
| Evaluate feasibility and acceptability of intervention | | | |
| Militello et al [], 2022 | ✓ | | |
| Yang et al [], 2022 | ✓ | | |
| Describing engagement | | | |
| Hoel et al [], 2022 | ✓ | | ✓ |
| Valle et al [], 2020 | ✓ | | |
aThis study was also designed to evaluate the feasibility and acceptability of its mHealth intervention.

Levels of Engagement

Table 4 summarizes the levels of engagement measured in each included study. Of the 20 studies that explicitly measured engagement, 14 (70%) measured Little e only, 2 (10%) measured Big E only, and 4 (20%) measured both Little e and Big E. Clearly, measures of engagement in MRTs of mHealth interventions are most often Little e measures.

Table 4. Levels of engagement measured in microrandomized trials of mobile health (mHealth) interventions.

| Source | Little e | Little e example | Big E | Big E example |
| --- | --- | --- | --- | --- |
| Evaluate effect of intervention components | | | | |
| Aguilera et al [], 2021 | Yes | Response rates to daily mood rating SMS | No | N/Aa |
| Battalio et al [], 2021 | Yes | If end-of-day logs for smoking are completed | No | N/A |
| Goldstein et al [], 2021 | Yes | Percentage of interventions accessed | No | N/A |
| Klasnja et al [], 2021 | Yes | Adherence to wearing the Fitbit | No | N/A |
| Klasnja et al [], 2019 | Yes | Adherence to activity tracker | No | N/A |
| Kramer et al [], 2020 | Yes | Whether participants responded to first message of the chatbot in an intervention conversation | No | N/A |
| Latham [], 2021b | Yes | Percentage of sleep diaries completed | Yes | Self-reported adherence to intervention prompt’s suggestion |
| Jeganathan et al [], 2022 | Yes | Nonadherence with recommendations for watch wear time | No | N/A |
| Spruijt-Metz et al [], 2022 | Yes | Time since Fitbit was last worn | No | N/A |
| Wang et al [], 2022 | Yes | Proportion of days that daily step/sleep minutes were provided within a week | No | N/A |
| Dowling et al [], 2022 | Yes | EMAc compliance | No | N/A |
| Rodda et al [], 2022 | Yes | EMA compliance | No | N/A |
| Evaluate strategies to improve engagement | | | | |
| Bell et al [], 2020 | Yes | Whether participants opened the intervention app in the hour after microrandomization | No | N/A |
| Bidargaddi et al [], 2018 | No | N/A | Yes | Whether participants performed the self-monitoring intervention activity |
| Nahum-Shani et al [], 2021 | No | N/A | Yes | Whether participants engaged in self-regulatory activities 1 h after randomization |
| Nordby et al [], 2022 | Yes | Minutes spent on the intervention | Yes | Self-reported frequency of practicing coping strategies taught |
| Evaluate feasibility and acceptability of intervention | | | | |
| Militello et al [], 2022 | Yes | Opening the application | Yes | Self-reported engagement with mindfulness exercises 24 hours after randomization |
| Yang et al [], 2022 | Yes | Percentage of EMAs completed | Yes | Percentage of prompted strategies completed |
| Describing engagement | | | | |
| Hoel et al [], 2022 | Yes | Proportion of submitted and nonblank logs | No | N/A |
| Valle et al [], 2020 | Yes | Proportion of intervention messages viewed before end of day | No | N/A |

aN/A: not applicable.

bThis study was also designed to evaluate the feasibility and acceptability of its mHealth intervention.

cEMA: ecological momentary assessment.

Determinants of Engagement

Table 5 presents the determinants, moderators, and covariates of engagement studied (if any) in MRTs that assessed or planned to assess engagement. Of the 20 included studies that measured engagement explicitly, 6 (30%) investigated the determinants of participant engagement. Of these 6 studies, 4 (67%) were designed to evaluate strategies to improve engagement and investigated the influence of notification-related variables on participant engagement as variables of interest [,-]. The remaining 2 (33%) studies were designed to evaluate the effect of intervention components on health outcomes or to describe engagement, respectively. The former assessed a time-based variable as its variable of interest: the causal effect of being in an intervention week on participant engagement []. The latter assessed task-related variables (lapses in self-monitoring and behavioral goal attainment) and an anthropometric variable (weight change) as determinants of participant engagement [].

Of these 6 studies, only 3 (50%), all of which were designed to evaluate strategies to improve engagement, investigated whether the determinants of engagement were moderated. Two of these studies exclusively examined the moderating effect of time-related variables [,]. Concretely, Bell et al [] investigated how the causal effect of sending a push notification (vs not sending it) on engagement was moderated by the number of days in the study. Bidargaddi et al [], by contrast, investigated whether the causal effect of sending (vs not sending) a push notification on engagement was moderated by the number of weeks in the study or by the day of the week (whether it was sent on a weekday or a weekend). The third study of this trio planned to study the moderating effect of a comprehensive set of physiological and psychosocial moderators representing vulnerability and receptivity, in addition to time-related moderators [].
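To make the structure of such moderation analyses concrete, a generic sketch (not the exact model or estimand of any included trial) expresses the proximal effect of sending vs not sending a notification at decision point $t$ as a linear function of a candidate moderator:

$$\mathbb{E}\big[Y_{t+1}(1) - Y_{t+1}(0) \mid X_t = x\big] = \beta_0 + \beta_1 x$$

where $Y_{t+1}$ denotes the proximal engagement outcome following decision point $t$; $Y_{t+1}(1)$ and $Y_{t+1}(0)$ are the potential outcomes under sending and not sending the notification, respectively; and $X_t$ is the candidate moderator (eg, days since the start of the study). Evidence that $\beta_1$ differs from 0 indicates that the effect of the notification on engagement varies with the moderator.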

Table 5. Determinants, moderators, and covariates of engagement assessed in microrandomized trials of mobile health (mHealth) interventions.

| Source | Determinants | Moderators | Covariates |
| --- | --- | --- | --- |
| Evaluate effect of intervention components | | | |
| Wang et al [], 2022 | Time related | N/Aa | N/A |
| Evaluate strategies to improve engagement | | | |
| Bell et al [], 2020 | Notification related | Time related | Demographic, time related, and health behavior related |
| Bidargaddi et al [], 2018 | Notification related | Time related | Time related, notification related, and task related |
| Nahum-Shani et al [], 2021 | Notification related | Psychological, societal, health behavior related, contextual, time related, physiological, and demographic | Demographic and time related |
| Nordby et al [], 2022 | Notification related | N/A | N/A |
| Describing engagement | | | |
| Valle et al [], 2020 | Task related, anthropometric | N/A | Time related, notification related, and anthropometric |

aN/A: not applicable.


Discussion

Principal Findings

In this scoping review, we aimed to better understand the state of participant engagement measurement in MRTs of mHealth interventions. To do so, we quantified the proportion of existing and planned studies that have explicitly assessed engagement and investigated how engagement has been operationalized in these studies, as well as which factors have been studied as determinants of engagement.
