Acceptability of artificial intelligence for cervical cancer screening in Dschang, Cameroon: a qualitative study on patient perspectives

The following discussion primarily addresses key findings that HCPs should consider when using a smartphone-based, AI-aided CAD tool, in order to avoid negative impacts on women's uptake of CC screening or on their return for follow-up. In this study, we explored the acceptance among women in Dschang of a CAD tool for CC relying on AI. The TAM was identified as an appropriate model to investigate the various factors influencing acceptance of the AI-based diagnostic aid. We found that, in the selected study setting, patients' acceptance was mainly influenced by perceived usefulness, privacy concerns, and trust. Perceived ease of use was not a main concern, probably because patients are not end-users of the application themselves and are therefore unaffected by its usability. The themes that emerged during the study were categorized according to the TAM, as depicted in blue in Fig. 3.

Fig. 3 Summary of the factors explored during the study (denoted in blue) using the modified TAM

Overall, participants perceived the AI-based CAD tool on a smartphone as a support for enhancing the diagnosis of CC. The findings can be considered on two distinct levels: first, the use of smartphones to diagnose CC and, second, the use of AI to diagnose the disease. However, many participants in the study made no distinction between the use of smartphones and an AI-based technology. Nevertheless, the following sections highlight important factors and, where possible, differences between their perceptions of smartphone use and of AI-based technology.

Acceptability of smartphones

Regarding the acceptability of smartphones, a study by Mungo et al. showed that smartphone-based cervicography is highly acceptable to HPV-positive women living with HIV in Western Kenya [17]. This observation was confirmed in our study, but it is important to point out that a few women expressed apprehension about the radiation emitted by smartphones and were thus less inclined to accept the method. Similar findings were reported in another Kenyan study that analysed HCPs' and patients' perspectives on using an mHealth tool for eye care [10]: a minority of patients were concerned about the negative health impact of mobile phones and therefore preferred traditional methods instead. Even though this perception was not shared by most women in our setting, it is important that HCPs are aware of this barrier. In a study aiming to understand and prevent health concerns about emerging mobile health technologies, Materia et al. highlighted the importance of evaluating and addressing health concerns related to smartphone use and of developing evidence-based communication strategies to limit them [18].

Professionals therefore need to be trained adequately to address safety concerns, including misconceptions, prior to the implementation of new technologies. Effective patient-centred communication can help recognise and address these concerns. Moreover, such misconceptions need to be explored in future studies to better understand their potential impact on CC screening.

Acceptability of AI-based CC screening

Since AI-based CC screening is a novel method, there are limited studies on the acceptability of this technique to patients, especially in LMICs. In this study, a few women, especially those with a higher level of education, expressed reservations about the AI-based tool's potential to malfunction (due either to issues with the smartphone or to the imprecision of the AI). Some also expressed concerns about the inexperience of the HCP handling the tool. However, most women viewed the AI-based tool as a means of improving HCPs' diagnostic precision.

Firstly, it needs to be acknowledged that our results are in line with the current literature on the use of AI in cancer diagnosis. Studies report that patients tend to accept AI more readily if a dangerous disease such as cancer can thereby be avoided [19]. Other studies exploring perceptions of AI in the diagnosis of breast and skin cancer concluded that it was an acceptable adjunctive technology for patients, provided the provider-patient relationship was preserved, but not an acceptable substitute for physicians or radiologists [15, 20, 21]. It is important to note, however, that these studies were conducted in high-income countries (the United Kingdom, United States, and Italy), and no studies in SSA could be identified.

With respect to the HCP competence concerns encountered in our study, similar perspectives were reported in other studies exploring the acceptability of comparable CAD tools in various healthcare domains. A study in Uganda on HCPs' perspectives on the acceptability of a mobile health tool revealed a concern that using a mobile application in front of patients and their families would undermine trust in the HCP's ability to diagnose and treat [9]. This perception was echoed in a UK study in which HCPs worried that using mobile tools during patient interactions would be perceived as unprofessional [11]. However, our findings indicate that when the use of smartphones was explained to patients beforehand, most participants maintained trust and confidence in the HCPs as well as in the diagnosis made by the AI-based tool. Only a minority of participants thought that the use of this smartphone application would make HCPs overly reliant, distracted, or lazy. Nonetheless, HCPs need to be adequately trained in how to communicate the inclusion of the AI device during patient consultations, to avoid misperceptions regarding its use.

Regarding communication, most participants recognized smartphones as a valuable tool for facilitating communication between HCPs and patients. By showing patients the images of their cervix, HCPs can promote transparency and thus establish trust and reinforce patient-provider communication.

Patient confidentiality

The last factor that is important to address is privacy and confidentiality concerns, which were expressed across all groups regardless of education level. Concerns about videos being posted on social media were highlighted, especially when smartphones are used by students or trainees. These concerns have been underlined in previous studies, such as a meta-synthesis of qualitative studies evaluating public perceptions of AI in healthcare, most of which were conducted in the Global North [22]. However, such concerns were highlighted to a lesser extent in studies based in the Global South. A study in Bangladesh reported that privacy concerns had an insignificant impact on the adoption of eHealth [23]. Similarly, a study by Asgary et al. on CC screening with smartphones in Ghana reported minimal privacy concerns [24]. Finally, in a systematic review analysing barriers to the use of mobile health in developing countries, privacy and confidentiality concerns were mentioned in only 3% of cases [25]. Nevertheless, HCPs should address these concerns by explaining to patients how their data are stored and how data protection issues are handled.

In summary, to address the factors affecting the acceptance of either smartphones or AI-based technology, patients need to be adequately informed by HCPs. Participants unanimously agreed that a face-to-face explanation by HCPs would be the best way to provide information about the AI-based tool, as it would allow them to ask questions and receive the information they need. Additionally, it should be acknowledged that some participants (especially those with higher education) were interested in additional sources of information, such as brochures or videos.

Strengths and limitations

Although this study is, to the best of our knowledge, one of the first in Cameroon to analyse the acceptability of AI for CC screening and one of the few in SSA, some limitations need to be acknowledged. First, interviewer or selection bias cannot be excluded. As some FGs were conducted in larger groups of seven participants, participants' ability to express their opinions freely might have been affected. To mitigate this, FGs were limited to women of similar educational backgrounds and were conducted by a Cameroonian anthropologist. However, the anthropologist's higher level of education may have influenced participants' responses, especially in FGs with participants with a primary level of education.

Moreover, even though the FGs comprised women of diverse socio-economic backgrounds, all participants had already taken part in the CC screening program; the sample may therefore have systematically included women who are inclined towards prevention and have already been sensitized to the importance of CC screening. Furthermore, the study revealed that the acceptability of an AI-based tool can be influenced by the use of smartphones, highlighting the need to explore differences affecting acceptability in larger qualitative and quantitative studies. Additionally, the use of the TAM can be seen as simplistic, and its suitability can be debated, since it is primarily intended for end-users of the technology, which the patients in our study were not. However, in our opinion, using the TAM allowed us to classify and understand existing barriers and facilitators to AI-based diagnostics.

A final limitation concerns the methodology of the study. A qualitative approach was chosen because it allowed us to explore women's perspectives and capture insights into the ways people perceive and interpret their surroundings [14, 26]. Qualitative studies, however, have limited generalizability. Nevertheless, as saturation was achieved for most themes, such as privacy concerns and perceived usefulness, we consider the results of the study relevant for similar settings.

Given the strengths and limitations of our study, further research is needed in three areas. Firstly, to gain broader insight into patients' acceptability of AI, a quantitative analysis should be considered, exploring the influence of women's educational status and including the perspectives of spouses and other community members, since they have a significant impact on patients' attitudes. Secondly, misconceptions about the dangers of smartphones in the community should be explored. Finally, acceptability should also be assessed after an initial pilot phase of the AI tool in CC screening, since perspectives may shift after this experience.
