The onset of the COVID-19 pandemic precipitated a rapid shift to virtual care in health care settings, inclusive of mental health care. Virtual care, often referred to as telehealth or telemedicine, refers to the provision of clinical care using information and communication technologies and can be delivered synchronously (ie, in real-time, at a distance) or asynchronously (ie, separated by time and space) [,]. During the early phases of the pandemic, synchronous forms of virtual care, such as video- and telephone-based care, replaced in-person care as the leading modalities of practice in many health care settings [-].
Research evaluating virtual mental health care has consistently shown high levels of both client satisfaction [-] and provider satisfaction [,]. However, recent literature also highlights challenges with virtual care, including inequities in access to and engagement with virtual care [,], technology challenges [], lack of connection between clients and health care providers [], and concerns regarding privacy and safety []. Understanding clients’ perspectives on virtual mental health care quality will be critical to inform future policies and practices. In addition, it will be important to understand how these experiences vary across sociodemographic groups, particularly in light of long-standing inequities in health care access and outcomes that risk being reinforced by the widespread uptake of virtual care [].
To date, there is only 1 known validated measure of client experiences of virtual mental health care, specifically telepsychiatry [], and there are no known validated measures that can be used to evaluate client experiences across varied types of mental health care. Since the transition to virtual care, a majority of patients across multiple surveys indicate a preference for virtual care to continue to exist as an option [,], with many preferring a hybrid approach that combines both in-person and virtual modalities []. Validated means to understand client and family experiences of care will be critical to inform evidence-based decision-making regarding virtual care and hybrid modalities of practice moving forward.
This paper outlines the process to redesign and validate the Virtual Client Experience Survey (VCES) that can be used to evaluate client and family experiences of virtual mental health and addictions care.
At the outset of the COVID-19 pandemic, the VCES was adapted from a previously validated telepsychiatry survey developed and used within the TeleMental Health program at the Centre for Addiction and Mental Health (CAMH), a mental health and addiction hospital located in Toronto, Ontario []. This telepsychiatry survey was updated to respond to the pressing need to evaluate virtual care both within our hospital and externally. Several adaptations were made to extend the use of the tool beyond its original intended use with clients accessing psychiatric consultation and assessment through videoconference [].
Survey Redesign

The approach to developing and validating the original telepsychiatry survey is outlined in the paper by Serhal et al []. Our team of clinicians, researchers, and health care leaders reviewed the telepsychiatry survey and adapted it for use in the current context. All items were reviewed and updated, with particular attention to ensuring relevance given the expansion of virtual care provision beyond physicians and psychiatrists. For example, items on the newly developed VCES ask about the client’s experience with the virtual appointment instead of the telepsychiatry appointment. Another item asks whether the health care provider explained the risks and benefits of treatments or interventions rather than the risks and benefits of medications. After consultation with clinicians across multiple programs and services at the hospital, the survey was also updated to ensure applicability not only to clients but to family members as well. This was an important shift given the key role that families play in mental health and addiction care and recovery [,] and in supporting access to care through videoconference.
The transition to virtual care during the COVID-19 pandemic also necessitated the examination of virtual care from a digital health equity perspective []. Our team consulted with the CAMH Health Equity Office as well as the Provincial System Support Program (PSSP), which administers the Ontario Perception of Care Tool for Mental Health and Addictions (OPOC-MHA) [], regarding the addition of sociodemographic questions to facilitate health equity analyses. Furthermore, 5 items from the OPOC-MHA were also added to facilitate alignment and comparison between the 2 tools.
The VCES sociodemographic questions ask about gender, age, geographic region, whether the client was born in Canada, racial or ethnic group, illness and disability, and other factors. Clients and families were also asked about their comfort with technology, where they accessed the virtual care appointment from (eg, home, health care organization), the type of device they used to access virtual care (eg, computer, smartphone), and which videoconference platform was used. The telephone was included as an option on the survey, as CAMH, like many health care settings, began providing more services by telephone during the pandemic, particularly for individuals experiencing barriers to video-based care. Finally, as recommended by Serhal et al [], we added additional survey items to assess access to care, and physical and emotional safety during virtual care appointments. We also added an item about compassionate care, given our research group’s interest in compassion as a critical factor in the therapeutic relationship and in patient and family experience of care [,].
The redesigned VCES consists of 22 items to assess the overall quality of care. These items are rated on a 4-point Likert scale (1=strongly disagree, 2=disagree, 3=agree, 4=strongly agree) with additional options including “not applicable” and “prefer not to answer.” The final question on the survey is an open-text question to elicit any additional feedback.
Conceptual Framework

Following the adaptation of the survey, the survey was revalidated using the 6 domains of health care quality of the Institute of Medicine (IOM) as a guiding framework. These 6 domains include being safe, effective, patient-centered, efficient, timely, and equitable []. The IOM domains were selected as a guiding framework to provide a comprehensive evaluation of client satisfaction and experience and to guide targeted quality improvement efforts across the different domains. The health quality domain definitions [] have been adapted below for the VCES.
Table 1. Health quality domain definitions.

Safe. Original definition: “Avoiding injuries to patients from the care that is intended to help them” []. Adapted definition for the VCESa: Virtual care that is physically and psychologically safe and minimizes potential risks.

Effective. Original definition: “Providing services based on scientific knowledge to all who could benefit and refraining from providing services to those not likely to benefit (avoiding underuse and overuse)” []. Adapted definition for the VCESa: Effective delivery of virtual care, inclusive of the use of virtual care technologies, to facilitate engagement in virtual care.

Patient-centered. Original definition: “Providing care that is respectful of and responsive to individual patient preferences, needs, and values and ensuring that patient values guide all clinical decisions” []. Adapted definition for the VCESa: Virtual care that is collaborative, respectful, and responsive to client and family preferences, needs, and values.

Efficient. Original definition: “Avoiding waste, in particular, waste of equipment, supplies, ideas, and energy” []. Adapted definition for the VCESa: Efficiency and ease in accessing virtual care.

Timely. Original definition: “Reducing waits and sometimes harmful delays for both those who receive and those who give care” []. Adapted definition for the VCESa: Timely access to virtual care for clients and families, including limited wait times for services.

Equitable. Original definition: “Providing care that does not vary in quality because of personal characteristics such as gender, ethnicity, geographic location, and socioeconomic status” []. Adapted definition for the VCESa: Virtual care that does not vary in quality based on personal characteristics and sociodemographic factors.

aVCES: Virtual Client Experience Survey.
Question Validation

Survey Dissemination

The VCES was piloted with a convenience sample of clients (16 years of age and older) and family members accessing a wide range of outpatient mental health and addiction services at CAMH through video or telephone. Survey data were collected using Research Electronic Data Capture (REDCap), a web-based application for securely collecting surveys and managing data [,]. The electronic survey took approximately 15 minutes to complete. Survey completion was voluntary, and all survey responses were anonymous. Survey data collection took place in 2021.
Information about the VCES and the process for survey administration was shared with clinicians and administrative staff using multiple channels, including the organizational intranet, email, support and sponsorship from clinical program directors, and program-level presentations by the project team. CAMH clinicians and administrative staff were asked to send information about the VCES, including the REDCap survey links, by email to all CAMH clients and families before their virtual care appointment. A template script for email communication to clients was provided, including a reminder that survey completion was voluntary and that all responses were anonymous. Clients and families were asked to complete a survey once per program or service that they participated in. The survey was made available to clients seeing providers across programs at the hospital, which includes multidisciplinary care providers (eg, physicians, nurses, social workers, occupational therapists, psychologists).
Content Validity

The 22 Likert-scale items on the VCES were independently reviewed by a panel of 5 individuals, including clinicians, researchers, and health care leaders, to determine alignment with the following IOM health quality domains: safe, effective, patient-centered, efficient, and timely []. Equity, the sixth IOM domain, is assessed through a comparison of survey response options by sociodemographic group. The panel then met through videoconference to discuss discrepancies until a consensus was reached.
Factorial Structure

To test the factorial structure of the VCES, a confirmatory factor analysis (CFA) was fitted in Mplus (version 8.2; Muthén & Muthén) [] using the weighted least squares with mean- and variance-adjusted chi-square test (WLSMV) estimator, which is recommended for ordinal outcomes []. Multiple imputation using the covariance method [], with 20 imputed datasets, was used to handle missing values, which were present in 45% of the surveys; 80% of surveys had 3 or fewer missing items. The evaluation of the model relied on the fit indices RMSEA (root mean square error of approximation) [], CFI (comparative fit index) [], TLI (Tucker-Lewis index) [], and SRMR (standardized root mean square residual) [], as well as the inspection of standardized factor loadings, variance explained, and the modification index relative to each item. The initial factorial structure was defined following expert opinion, with minor respecification of the model based on modification indices, factor loadings, reliability, and item-total correlation. The respecifications were checked for alignment with the construct definitions and item interpretation. Cutoffs for fit indices in CFA and reliability are not universally accepted []. That said, based on results from Hu and Bentler [], TLI and CFI higher than 0.95, SRMR lower than 0.08, and RMSEA lower than 0.06 are considered acceptable.
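To illustrate how a fit index relates to the underlying chi-square statistic, the sketch below computes RMSEA using a common textbook formula. This is an illustration only, not the exact computation performed by Mplus, and software packages differ in details such as dividing by N versus N−1.

```python
import math

def rmsea(chi_square: float, df: int, n: int) -> float:
    """Root mean square error of approximation.

    Uses the common formula sqrt(max(chi2 - df, 0) / (df * (n - 1)));
    a perfectly fitting model (chi2 <= df) yields 0.
    """
    return math.sqrt(max(chi_square - df, 0.0) / (df * (n - 1)))

# Using the summary values reported later in this paper
# (mean chi-square 437.5, df=199, N=181 respondents):
print(round(rmsea(437.5, 199, 181), 2))  # → 0.08
```

With this paper's reported chi-square and sample size, the formula reproduces the reported RMSEA of approximately 0.08.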
Reliability

The reliability of the constructs was estimated by the Cronbach α coefficient [], a measure of internal consistency between the items. The corrected item-total correlation (correlation between the item and the total scale score with the item removed) and the α coefficient for the scale with the item removed were also used as sources of information as to how the items fit the factors. A Cronbach α of 0.7 or higher is considered good [].
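Both reliability statistics described above are straightforward to compute. The following is a minimal sketch using hypothetical Likert responses (the item scores shown are illustrative, not data from this study):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of scale totals
    return (k / (k - 1)) * (1 - item_vars / total_var)

def corrected_item_total(items: np.ndarray, i: int) -> float:
    """Correlation between item i and the sum of the remaining items."""
    rest = np.delete(items, i, axis=1).sum(axis=1)
    return float(np.corrcoef(items[:, i], rest)[0, 1])

# Hypothetical 1-4 Likert responses: 5 respondents x 3 items
scores = np.array([
    [4, 4, 3],
    [3, 3, 3],
    [2, 2, 3],
    [4, 3, 4],
    [1, 2, 2],
], dtype=float)

print(round(cronbach_alpha(scores), 2))         # → 0.86
print(round(corrected_item_total(scores, 2), 2))  # → 0.68
```

By this study's criteria, a subscale with these values (α≥0.7, item-total correlation >0.3) would be considered acceptable.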
Ethical Considerations

Consent to participate was implied. The survey contained an introductory section explaining the purpose of the survey, informing potential respondents that participation was voluntary and anonymous, and informing them that the findings would be presented as grouped or aggregated data for any presentation or publication. There was no compensation offered to clients and families for completing the survey. This project received ethical approval from CAMH Quality Projects Ethics Review (QPER-2021-000).
In total, the survey was completed 181 times, with 99 surveys completed by clients, 16 by support persons on behalf of clients, and 22 by family members or caregivers. Respondent information is missing on 44 surveys. Sociodemographic characteristics are included in .
Most respondents used a computer (113/179, 62.4%) or tablet or smartphone (58/179, 32.4%). Only 5.5% (10/179) reported using a telephone (audio only). The majority of respondents were either comfortable with technology (64/179, 35.8%) or very comfortable (85/179, 47.5%).
Table 2. Participant characteristics: participants, n (%), by role and other characteristics.
aCAMH: Centre for Addiction and Mental Health.
Reliability

The construct reliability, as measured by Cronbach α, was generally high. Timely was the only subscale with an α lower than 0.7; all others were above 0.8 (more details in ). In all cases, the corrected item-total correlation was higher than 0.3. Considering the items in the model, item 9 (effective) and items 2 and 10 (timely) had lower corrected item-total correlation, with all the other items having correlations around 0.7 or above. Within the “timely” factor, the low item-total correlation was likely reflective of the small number of items associated with this domain.
Table 3. Confirmatory factor analysis results and item statistics. For each factor and survey item, the table reports the loadinga, SEb, missingc (%), R2d, correlatione, and meanf (SDg), along with factor-level internal consistency (eg, Efficient: Cronbach αh≥0.87).
aConfirmatory factor analysis standardized loadings.
bStandard error of the confirmatory factor analysis loading.
cProportion of respondents with missing values on the item. These items were imputed using multiple imputation for the confirmatory factor analysis.
dVariance explained by the confirmatory factor analysis model.
eCorrected item-total correlation (correlation between the item and the sum score with the item removed).
fItem mean (item values range from 1=strongly disagree to 4=strongly agree).
gItem standard deviation.
hCronbach α (internal consistency).
iCAMH is referenced in the survey that was disseminated internally. The external version refers to “this organization.”
jItems from the Ontario Perception of Care Tool for Mental Health and Addictions (OPOC-MHA) [].
Confirmatory Factor Analysis

Confirmatory factor analysis loadings are shown above. The model was fitted after multiple imputation with 20 datasets, and fit statistics are reported as the mean and SD across these 20 datasets. The mean chi-square value was 437.5, with df=199 (P<.001). The mean RMSEA was 0.08 (SD 0.002), which is considered reasonable. The mean CFI was 0.987 (SD 0.001), the mean TLI was 0.985 (SD 0.001), and the mean SRMR was 0.04 (SD 0.001). These values are considered to be quite good. With the exception of item 9 in the “effective” factor, all other items had standardized loadings higher than 0.7.
The model shown above had 2 respecifications: item 19 was moved from “effective” to “patient-centered,” and item 18 was moved from “patient-centered” to “safe.” These changes were based on the modification index and discussions with the project team; however, they did not meaningfully change the fit measures. While item 9 does not fit as well in the “effective” factor, it was maintained as is, as it represented the best conceptual fit.
The sample size for structural equation models is not well defined because of the high number of parameters of interest and broad range of model complexity. Considering our model with 5 factors with an average of 4 indicators per factor and loadings around 0.8, Wolf et al [] found that a sample size of 120 tends to be sufficient for power above 80%. These results are negatively affected by the presence of missing values; as such, our sample of 181 is expected to have reasonable power for a CFA.
This paper provides a valid and reliable measure to evaluate client and family experiences of virtual mental health and addiction care, focusing on 6 domains of health care quality. This survey has been used internally at CAMH to guide the next steps and recommendations for virtual mental health care and has been shared externally in both English and French to support quality measurement within other organizations. Information about the survey was shared through a webinar, and the survey was subsequently downloaded hundreds of times across Canada and internationally.
Implications for Virtual Mental Health Care

Given the widespread uptake of virtual care, this survey has broad applicability across settings that provide mental health and addiction care. The VCES can be used to guide targeted quality improvement initiatives across the 6 IOM domains of health care quality, including safe, effective, patient-centered, efficient, timely, and equitable care []. It also has the potential to be adapted to other health care specialties and contexts.
Considerations for Administering the VCES

From our experience, we found the use of REDCap (Research Electronic Data Capture; Vanderbilt University) to be an effective means to disseminate the survey, collate results, and ensure anonymity. Some training and guidance regarding survey administration processes were required to ensure that the survey was delivered as part of the standard of care. We found it advantageous to delineate a time-limited window for survey dissemination and completion to limit time demands on both clients and clinicians. In terms of interpreting results, we set a cutoff of 3 (agree) and above as a positive result indicating high quality of care. Finally, there was a high degree of interest in receiving feedback on the survey results. We offered aggregated program-level reports once an adequate sample of surveys was completed in order to preserve respondent anonymity; this is particularly important as demographic questions attached to the survey could render individual results identifiable.
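The scoring rule described above can be sketched in a few lines. This is an illustrative implementation under the assumption that “not applicable” and “prefer not to answer” responses (represented here as None) are excluded from the denominator; the survey itself does not prescribe a specific coding.

```python
from typing import Optional, Sequence

def percent_positive(responses: Sequence[Optional[int]]) -> Optional[float]:
    """Percentage of scorable responses rated 3 (agree) or 4 (strongly agree).

    None values stand in for "not applicable" / "prefer not to answer"
    and are excluded from both numerator and denominator.
    """
    rated = [r for r in responses if r is not None]
    if not rated:
        return None  # no scorable responses for this item
    return 100 * sum(r >= 3 for r in rated) / len(rated)

# Hypothetical responses to one item: 1-4 Likert values plus one excluded response
print(round(percent_positive([4, 3, 2, None, 4, 1, 3]), 1))  # → 66.7
```

Aggregating such per-item percentages at the program level, as described above, keeps individual respondents unidentifiable while still supporting targeted quality improvement.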
Next Steps

While the findings from the VCES provided invaluable and timely feedback, it is important not to limit patient and family feedback to a survey. The open text option on the survey provided a breadth of feedback not necessarily addressed within the existing survey questions. Analysis of this qualitative data may spur future adaptations to the survey and, on a practical level, may steer quality improvement initiatives resulting from survey findings. Furthermore, ongoing patient and family engagement in interpreting survey results and prioritizing and guiding quality improvement will be essential to making meaningful and impactful improvements.
Patient experience of virtual care should also be considered alongside provider experience of virtual care. To complement findings from the VCES and to capture another key perspective within health care, our team has developed and validated the Virtual Provider Experience Survey (VPES). These complementary tools provide key metrics to inform how health care organizations and systems can move toward the “quadruple aim,” a well-recognized framework for health care that seeks to improve population health outcomes, reduce health care costs, and improve patient, family, and provider experiences of care []. The consistent use of standardized tools across settings is a valuable way to inform policy and program planning with respect to virtual care [] and to further our advancement toward the quadruple aim. Further practice-based research is also needed to examine the relationship between the experience of care and health outcomes for clients and families who access virtual mental health and addiction care and the cost-effectiveness of those services.
Limitations

While the VCES was validated with 181 survey responses, it is possible that some clients or family members completed the survey more than once if they were involved in multiple programs or services during the evaluation period. In addition, when administering surveys, there is always the risk of self-selection and response bias. We sought to mitigate this risk by asking clinicians and administrative staff to send the survey link to all clients and families accessing virtual care at CAMH. A response rate can be beneficial to ascertain the extent of potential bias; however, we were unable to determine the response rate, as it is unknown how many clients and families received the link in their email correspondence from clinicians or administrative staff. It is quite likely that those who have a higher degree of familiarity or comfort with virtual care were more likely to complete an online survey sent by email.
Conclusion

We sought to address a notable gap in the literature by redesigning and validating a measure of client and family experiences of virtual mental health care. Client and family perspectives are needed to evaluate the current state of mental health service delivery and highlight areas that need improvement. By effectively addressing challenges as they emerge, it is anticipated that we will continue to move toward hybrid modalities of practice that leverage the strengths and benefits of telephone, video, and in-person care to effectively respond to unique client and family needs and circumstances [].
The authors would like to acknowledge members of the virtual mental health team at CAMH for supporting the development and dissemination of the VCES, as well as CAMH clinicians and administrative staff for their support in sharing the VCES with clients and families. We would also like to acknowledge the clients and families who took the time to share their feedback on their experiences of virtual care. Generative AI was not used for any section of the manuscript. No funds, grants, or other financial support were received for this project.
The data that support the findings presented in this report are available upon request from the corresponding author.
AC contributed to study conceptualization, data analysis, writing, and review. AK contributed to study conceptualization, data collection and analysis, writing, and review. MS contributed to data analysis, writing, and review of the manuscript. AG contributed to data collection and analysis, review, and editing of the manuscript. DC contributed to the study conceptualization, data collection and analysis, review, and editing of the manuscript. ES contributed to the study conceptualization, data analysis, review, and editing of the manuscript.
None declared.
Edited by T de Azevedo Cardoso; submitted 11.06.23; peer-reviewed by J Sharp, SH Hong, A Makai, J Fitzsimon; comments to author 22.03.24; revised version received 23.05.24; accepted 23.09.24; published 03.01.25.
©Allison Crawford, Anne Kirvan, Marcos Sanches, Amanda Gambin, Denise Canso, Eva Serhal. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 03.01.2025.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research (ISSN 1438-8871), is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.