Assessing the need for coronary angiography in high-risk non-ST-elevation acute coronary syndrome patients using artificial intelligence and computed tomography

To the best of our knowledge, this is the first study assessing the potential role of AI, specifically the ChatGPT model, in guiding decisions on whether to perform ICA in high-risk NSTE-ACS patients who have undergone CCTA. The findings suggest that ChatGPT could offer valuable guidance in this challenging clinical scenario, where even with CCTA, which can reveal a wide range of findings from normal arteries to occlusion as well as many borderline lesions, the decision whether to perform ICA may not be straightforward.

A significant finding is that among the three patients with a culprit lesion for whom ChatGPT recommended against ICA, the model was misled by CCTA results that missed the lesion in two cases. In the third case, the lack of additional information on the type or physiology of the plaques may have influenced ChatGPT’s judgment. This underscores the dependence of AI accuracy on the quality of the input data and highlights the necessity of accurate and reliable data for training and testing AI models. Interestingly, in nine cases where ChatGPT recommended ICA, no coronary lesion or significant cardiovascular risk factors were present. This further confirms the need for comprehensive input data to maximize the accuracy of AI recommendations. However, the ‘black box’ nature of AI decision-making, as noted in previous studies, remains a significant challenge, as the underlying logic of AI recommendations is often opaque [7]. Thus, the accuracy of the information fed to the model could be enhanced by using other AI tools to interpret imaging, potentially improving the precision of the decision-making process.

Despite the lack of formal integration of AI into current clinical guidelines, its applications in cardiology are expanding rapidly. AI has shown promise across various imaging modalities and decision-making processes [8, 9]. Recently, we demonstrated that ChatGPT could aid the decision-making process during heart-team meetings for patients with severe aortic stenosis, using complex clinical scenarios involving only a few variables [3]. The model’s treatment recommendation was compared with the Heart Team’s decision, and the two were concordant in 77% of cases. One aspect we highlight is the importance of the information provided to these large language models (LLMs), which could theoretically improve their accuracy. To enhance the specificity of this process, incorporating a refined CCTA analysis, as previously discussed, along with an analysis of plaque physiology and characterization, could be beneficial [10]. AI software could help identify and extract data on plaque burden, describe the presence of calcified or non-calcified lesions, and determine their degree of stenosis, as reported in chronic coronary syndrome [11,12,13]. Subhi et al. found that predictive models based on CCTA could accurately identify future culprit lesions with a specificity of 89.3%, indicating the potential for AI to enhance predictive accuracy through detailed analysis [14]. Another recent study assessed the efficacy of an AI-based tool for grading coronary stenosis by comparing lesions measured on CCTA against a benchmark standard (ICA) and found it to exhibit high diagnostic accuracy [15]. Here, we can observe the potential of decision-making AI that takes into account the medical history, biological assessment, ECG, and the detailed CCTA results in a single prompt. This approach aims to deliver the highest quality of care to patients through the use of AI, while also accelerating diagnosis and treatment [9].
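For illustration, a minimal sketch of how such a single-prompt query might be assembled is shown below. It assumes the OpenAI Python client; the field names, example values, and model identifier are hypothetical and are not those used in this study.

```python
# Minimal sketch: assemble history, biology, ECG, and CCTA findings into one LLM prompt.
# Assumptions: the openai Python client is installed and an API key is configured;
# all field names, example values, and the model identifier are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable


def build_prompt(patient: dict) -> str:
    """Concatenate the clinical variables into a single decision-support prompt."""
    return (
        "You are assisting with a decision on invasive coronary angiography (ICA) "
        "in a high-risk NSTE-ACS patient.\n"
        f"Medical history: {patient['history']}\n"
        f"Biological assessment: {patient['biology']}\n"
        f"ECG: {patient['ecg']}\n"
        f"CCTA findings: {patient['ccta']}\n"
        "Should ICA be performed? Answer yes or no and justify briefly."
    )


example = {
    "history": "68-year-old, hypertension, diabetes, former smoker",   # hypothetical
    "biology": "rising high-sensitivity troponin, normal creatinine",  # hypothetical
    "ecg": "ST-segment depression in V4-V6",                           # hypothetical
    "ccta": "70% non-calcified stenosis of the proximal LAD",          # hypothetical
}

response = client.chat.completions.create(
    model="gpt-4",  # placeholder model identifier
    messages=[{"role": "user", "content": build_prompt(example)}],
)
print(response.choices[0].message.content)
```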

The use of CCTA in this acute phase has been demonstrated to be safe and helps limit invasive examinations in the emergency room [16]. The use of CCTA for patients at low to moderate risk of ACS has been incorporated into the latest guidelines owing to its high negative predictive value [1, 17, 18]. In addition to its value as a “rule-out” exam, CCTA is as effective as ICA in assessing long-term risk when performed in the acute phase [19]. The future also lies in the development of coronary physiology using CCTA, which has been applied in chronic coronary syndrome but not yet in ACS. A systematic review and meta-analysis showed moderate agreement between FFR derived from computed tomography (FFR-CT) and invasive measurement during ICA, with the highest agreement for invasive FFR values greater than 0.90. Our study will hopefully help address this question in acute coronary syndrome [4].

Our study found reassuring results in the AI decision-making process, with an overall accuracy of 86%, which is excellent for a model not specifically trained for this task, albeit with a specificity of 64%. Emerging techniques will accelerate the processes involved in the management of these patients and further improve the sensitivity and specificity of the decision-making process. However, we acknowledge certain limitations: the significance of CCTA-derived information in this context, where CCTA is not typically recommended, is unknown, and the potential variability in ChatGPT’s responses was not examined, although our aim was to simulate a real-life scenario in which a physician might use ChatGPT as a consultative tool for decision-making. Moreover, this study was neither intended nor powered to compare diagnostic modalities directly; it was designed as a feasibility study to assess whether AI could manage complex clinical situations and aid in decision-making.
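For reference, and assuming the conventional definitions were used, the reported accuracy and specificity correspond to the expressions below, where TP, TN, FP, and FN denote true and false positives and negatives of the ICA recommendation scored against the reference standard:

\[
\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}, \qquad
\text{Sensitivity} = \frac{TP}{TP + FN}, \qquad
\text{Specificity} = \frac{TN}{TN + FP}.
\]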

Finally, in this preliminary phase of implementing AI in clinical settings, it is crucial not to blindly follow the advice of an LLM. LLMs do not possess real-time knowledge updates or a deep clinical understanding, which can result in confident yet erroneous conclusions. Additionally, the complexity of AI decision-making and the opaque “black box” nature of its algorithms can obscure the logic behind its recommendations, posing significant concerns in critical fields such as healthcare. In a future prospective study, a head-to-head comparison between physicians and ChatGPT would allow a better assessment of diagnostic accuracy; however, owing to our study’s design, this could not be accomplished.
