Deep Learning in High-Resolution Anoscopy: Assessing the Impact of Staining and Therapeutic Manipulation on Automated Detection of Anal Cancer Precursors

INTRODUCTION

Anal squamous cell carcinoma (ASCC) is an increasingly prevalent malignancy related to persistent infection with high-risk human papillomavirus strains (1,2). The transformation of persistently infected cells into malignant cells follows a human papillomavirus–dependent pathway, similar to that observed in cervical cancer (3). High-grade squamous intraepithelial lesions (HSIL) constitute the direct precursors of ASCC, and high-resolution anoscopy (HRA) coupled with tissue sampling is the gold standard for the detection of these lesions. Screening programs in high-risk populations aim to detect these lesions or early invasive ASCC (4).

The recognition of ASCC precursor lesions during HRA is pivotal to their early treatment. Indeed, the landmark ANCHOR study demonstrated that treatment of HSIL, particularly with office-based ablative therapies, was associated with a lower risk of progression to anal cancer (5). Moreover, HRA remains the standard of practice for the follow-up of patients after treatment of ASCC precursors to assess for recurrent disease (6,7). Nevertheless, despite its importance in the clinical management of these patients, HRA is not widely available, and expertise in HRA demands a long learning curve, which hinders efforts to develop HRA-based screening programs for high-risk populations (8).

The expansion of digital health care has been evident over the past years. Clinical specialties heavily reliant on imaging, as is the case of Gastroenterology, are increasingly investing in the development of artificial intelligence (AI) and big data systems to increase diagnostic performance and overall system efficiency (9–11). Preliminary studies have demonstrated the potential of AI algorithms to detect premalignant lesions in endoscopy images (12,13). A proof-of-concept study aiming to develop an AI algorithm based on convolutional neural networks (CNNs) has shown high performance levels for the detection of ASCC precursors (HSIL) and their differentiation from low-grade squamous intraepithelial lesions (LSIL) (14). Nevertheless, to date, the impact of technical specificities, particularly staining (with acetic acid and/or Lugol solution) and previous interventions, on the accuracy of deep learning models has not been assessed. Therefore, this study aimed to evaluate the performance of a deep learning model in detecting and differentiating between HSIL and LSIL under various staining conditions (unstained, acetic acid, or Lugol) and after therapeutic interventions.

METHODS

Study design and patient selection

The methods for this study are, in general, similar to those described in a previous study (14). We included patients who underwent HRA between 2021 and 2022 at Groupe Hospitalier Paris Saint-Joseph (Paris, France). The HRA procedures were performed using the high-resolution videoproctoscope THD HRStation (THD SpA, Correggio, Italy). Each HRA procedure included in this study had been previously recorded in video files, which were segmented into still images using VLC media player (VideoLAN, Paris, France). These images were retrospectively reviewed, and a total of 27,770 HRA frames representing the anal transition zone were ultimately extracted. It is important to note that this study was conducted in a noninterventional manner, and no clinical decisions were made based on the results of this investigation. The study was approved by the institutional review board of Groupe Hospitalier Paris Saint-Joseph (IRB 00012157) and followed the statements of the Declaration of Helsinki.
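As an illustration only, this video-to-frame segmentation can also be scripted rather than performed manually in VLC; the sketch below uses FFmpeg (which the authors also used for data preparation). The file names and sampling rate are assumptions, not study parameters.

```python
# Hypothetical sketch: extract still frames from a recorded HRA video with FFmpeg.
# The output rate (fps) and paths are illustrative assumptions.
import subprocess
from pathlib import Path

def extract_frames(video_path: str, out_dir: str, fps: int = 1) -> None:
    """Save fps frames per second of video as sequentially numbered PNG stills."""
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    subprocess.run(
        ["ffmpeg", "-i", video_path, "-vf", f"fps={fps}",
         str(Path(out_dir) / "frame_%06d.png")],
        check=True,
    )

# extract_frames("hra_exam_001.mp4", "frames/exam_001", fps=2)
```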

HRA procedures

All HRA procedures were conducted using the THD HRStation (THD SpA) by 2 coloproctologists with experience in HRA (L.S. and N.F.). Images from patients diagnosed with LSIL or HSIL were included, based on the histopathological analysis of biopsies obtained during HRA procedures, following the College of American Pathologists protocol (15). We included images from both categories in distinct settings: unstained HRA examinations, examinations with 3% acetic acid staining, examinations with Lugol staining, and examinations after therapeutic manipulation of the anal canal (e.g., after radiofrequency ablation, laser ablation, infrared coagulation, plasma coagulation, or surgical ablation).

Image processing, data set organization, and development of the convolutional neural network

The images were divided into distinct data sets. For the main analysis, which aimed to assess the performance of the CNN in distinguishing HSIL from LSIL, we included a total of 27,770 images (n = 19,114 HSIL and n = 8,656 LSIL). The full data set was divided into training (n = 19,238) and testing (n = 8,532) data sets. This analysis used a patient-split 5-fold cross-validation design: the training data set (approximately 70% of all data) was split into 5 even-sized folds. The results from this experiment were used to optimize the model parameters, which were subsequently evaluated using the testing data set (approximately 30% of all data) (Figure 1). The patient-split design ensured that images from the same patient were restricted to a single fold or to the testing data set, which aimed to avoid data leakage.
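To make the splitting scheme concrete, the following minimal sketch shows one way to build patient-split folds with scikit-learn's group-aware splitters. The use of GroupShuffleSplit and GroupKFold, and all variable names, are illustrative assumptions rather than the authors' exact code.

```python
# Minimal sketch of a patient-split design: all images of a patient stay in a
# single fold or in the held-out testing set, which prevents data leakage.
from sklearn.model_selection import GroupKFold, GroupShuffleSplit

def patient_split(image_ids, labels, patient_ids, n_folds=5, test_size=0.3):
    # Hold out ~30% of the data, by patient, for the final testing data set.
    holdout = GroupShuffleSplit(n_splits=1, test_size=test_size, random_state=0)
    train_idx, test_idx = next(holdout.split(image_ids, labels, groups=patient_ids))

    # Split the remaining patients into 5 even-sized cross-validation folds.
    gkf = GroupKFold(n_splits=n_folds)
    cv_groups = [patient_ids[i] for i in train_idx]
    folds = list(gkf.split(train_idx, [labels[i] for i in train_idx], groups=cv_groups))
    return train_idx, test_idx, folds
```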

Figure 1.

Flowchart of the study design. A convolutional neural network was developed based on a total of 27,770 images from 103 HRA examinations. The global performance of the network was assessed using a patient-split 5-fold cross-validation analysis. Subsequently, subanalyses were performed to evaluate the performance of the CNN in the subsets of images with no staining, with acetic acid, with Lugol, and after manipulation of the anal canal. AUROC, area under the receiver operating characteristic curve; CNN, convolutional neural network; HRA, high-resolution anoscopy; HSIL, high-grade squamous intraepithelial lesion; LSIL, low-grade squamous intraepithelial lesion; NPV, negative predictive value; PPV, positive predictive value.

For the secondary analysis, 4 data sets were designed using unstained HRA images (n = 2,820), images stained with 3% acetic acid (n = 13,368), images stained with Lugol (n = 2,195), and images obtained after therapeutic manipulation of the anal canal (n = 9,377). The latter subset included frames collected at different stages of therapeutic procedures, which were classified by experts as showing areas compatible with previously defined HSIL. For each of these subsets, images were divided using a patient-split approach to constitute training and testing data sets (Figure 1). For all subanalyses, the images of a given patient belonged to a single fold or to the testing data set.

A circular region of interest (ROI) surrounded by white color was present in all images. The HoughCircles transform from OpenCV, with its parameters optimized to detect a single circular ROI, was applied to each frame (16). Subsequently, masking, contouring, and cropping functions were used to place the extracted ROI at the center of a black image with the same dimensions as the original.
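A minimal sketch of this preprocessing step is shown below, assuming one bright circular field of view per frame; all HoughCircles parameter values are illustrative placeholders rather than the optimized values used in the study.

```python
# Illustrative ROI extraction: detect the circular field of view, mask out the
# surrounding white border, and recenter the crop on a black canvas of the
# same dimensions as the original frame.
import cv2
import numpy as np

def extract_circular_roi(frame: np.ndarray) -> np.ndarray:
    gray = cv2.medianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), 5)
    circles = cv2.HoughCircles(
        gray, cv2.HOUGH_GRADIENT, dp=1.2,
        minDist=frame.shape[0],               # expect a single circle per frame
        param1=100, param2=30,                # placeholder edge/accumulator thresholds
        minRadius=frame.shape[0] // 4, maxRadius=frame.shape[0] // 2,
    )
    if circles is None:
        return frame                          # fall back to the raw frame
    x, y, r = np.round(circles[0, 0]).astype(int)

    # Mask everything outside the detected circle.
    mask = np.zeros(frame.shape[:2], dtype=np.uint8)
    cv2.circle(mask, (x, y), r, 255, thickness=-1)
    roi = cv2.bitwise_and(frame, frame, mask=mask)

    # Crop the ROI and paste it at the center of a black canvas.
    crop = roi[max(y - r, 0):y + r, max(x - r, 0):x + r]
    canvas = np.zeros_like(frame)
    cy, cx = canvas.shape[0] // 2, canvas.shape[1] // 2
    h, w = crop.shape[:2]
    canvas[cy - h // 2:cy - h // 2 + h, cx - w // 2:cx - w // 2 + w] = crop
    return canvas
```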

In the first experiment, the CNN model was created using a ResNet CNN as a backbone, with weights pretrained on ImageNet. Transfer learning to our data was performed while maintaining the model's architecture: the last fully connected layers were removed, and fully connected layers were attached according to the number of classes used to classify the HRA images. Two blocks were used, each with a fully connected layer followed by a dropout layer with a drop rate of 0.3. Subsequently, we added a dense layer with a size defined as the number of categories to classify. A learning rate of 0.00015, a batch size of 128, and 10 training epochs were set by trial and error. We used the FFmpeg, Pandas, and Pillow libraries to prepare the data and PyTorch to run the model. The analyses were performed on a computer equipped with a 2.1-GHz Intel Xeon Gold 6130 processor (Intel, Santa Clara, CA) and a single NVIDIA RTX A6000 graphics processing unit (NVIDIA, Santa Clara, CA).
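The description above translates into a short PyTorch sketch. The ResNet variant, the hidden-layer sizes, and the optimizer are not specified in the text, so ResNet-50, 512/128 units, and Adam are assumptions made purely for illustration.

```python
# Hedged sketch of the transfer-learning setup: an ImageNet-pretrained ResNet
# backbone whose head is replaced by two fully connected blocks (each followed
# by dropout at 0.3) and a final dense layer sized to the number of classes.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 2  # HSIL vs LSIL

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)  # variant assumed
in_features = model.fc.in_features

# Remove the original ImageNet classifier and attach the custom head.
model.fc = nn.Sequential(
    nn.Linear(in_features, 512), nn.ReLU(), nn.Dropout(p=0.3),   # block 1
    nn.Linear(512, 128), nn.ReLU(), nn.Dropout(p=0.3),           # block 2
    nn.Linear(128, NUM_CLASSES),                                 # dense output layer
)

optimizer = torch.optim.Adam(model.parameters(), lr=0.00015)  # optimizer assumed
criterion = nn.CrossEntropyLoss()
# Training then runs for 10 epochs with a batch size of 128, as reported.
```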

Model performance and statistical analysis

The first experiment aimed to provide an overview of the performance of the CNN in differentiating HSIL from LSIL. In the second experiment, we aimed to estimate the performance of the CNN in distinguishing HSIL from LSIL under specific conditions: unstained HRA images, after staining with 3% acetic acid or Lugol, and after therapeutic manipulation of the anal canal. The output provided by the algorithm was compared with the gold standard of histology (HSIL vs LSIL). Gradient heat maps were generated to identify the areas contributing to the output of the algorithm. The trained CNN computed the probability of each category for every image. The performance measures included sensitivity, specificity, positive and negative predictive values, and accuracy. Moreover, each model's discriminating performance was determined by analysis of the receiver operating characteristic curves. In addition, the image processing capacity of the CNN was determined by calculating the time required for the CNN to provide output for all images in the validation image data set. Scikit-learn version 0.22.2 (17) was used for statistical analysis.
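For reproducibility, all the reported measures can be derived from the binary confusion matrix; a minimal sketch using scikit-learn (with HSIL as the positive class, an assumed convention) is shown below.

```python
# Sensitivity, specificity, PPV, NPV, accuracy, and AUC from model outputs.
from sklearn.metrics import confusion_matrix, roc_auc_score

def binary_metrics(y_true, y_pred, y_score):
    # confusion_matrix returns [[TN, FP], [FN, TP]] for labels {0, 1}
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "auc": roc_auc_score(y_true, y_score),  # ROC curve analysis
    }
```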

RESULTS

Performance of the algorithm for the distinction between HSIL and LSIL

A total of 88 patients, accounting for 103 HRA examinations, were enrolled in this study. Globally, 27,770 HRA images were extracted and used for the construction of the CNN. Of these images, 8,656 depicted anal lesions harboring LSIL and 19,114 showed HSIL. These images were divided into training and testing data sets, following a distribution of 70% (n = 19,238, of which 15,002 showed HSIL) and 30% (n = 8,532, of which 4,112 showed HSIL), respectively. Figure 2 represents the classification output provided by the CNN. The higher the probability computed by the CNN, the greater the confidence in the CNN's prediction.

Figure 2.

Output obtained after running the convolutional neural network on different subsets: (a) nonstained; (b) acetic acid; (c) Lugol; (d) postmanipulation. HSIL, high-grade squamous intraepithelial lesion; LSIL, low-grade squamous intraepithelial lesion.

The first experiment aimed to determine the performance of the CNN in distinguishing HSIL from LSIL using a patient-split cross-validation approach. At this stage, the training data set was divided into 5 even-sized groups (folds). The performance results of this experiment are summarized in Table 1. Overall, during the training phase, the algorithm achieved a mean sensitivity of 96.3% (95% confidence interval [CI] 91.5%–100.0%), a specificity of 94.3% (95% CI 89.5%–99.1%), and an estimated overall accuracy of 96.0% (95% CI 92.9%–99.1%).

Table 1. Results from the 5-fold cross-validation experiment

                         Sensitivity (%)    Specificity (%)   PPV (%)            NPV (%)            Accuracy (%)      AUC
Fold 1                   95.7               97.8              81.1               99.4               85.1              0.990
Fold 2                   89.9               96.7              98.7               77.8               91.7              0.990
Fold 3                   97.8               96.7              98.8               94.0               97.5              0.998
Fold 4                   99.4               90.9              98.1               96.9               97.9              0.990
Fold 5                   98.6               89.4              97.7               93.2               97.0              0.990
Overall, mean (95% CI)   96.3 (91.5–100.0)  94.3 (89.5–99.1)  98.5 (97.7–99.3)   89.4 (79.7–99.2)   96.0 (92.9–99.1)  0.992 (0.986–0.998)

AUC, area under the curve; CI, confidence interval; NPV, negative predictive value; PPV, positive predictive value.

In the second experiment, the performance of the fine-tuned CNN was assessed using an independent testing data set comprising images from patients not included in the training stage (n = 8,532). The confusion matrix combining the CNN predictions and the gold standard histopathological diagnosis is summarized in Table 2. After hyperparameter optimization during the training stage, the algorithm achieved a sensitivity of 97.4%, a specificity of 99.2%, a positive predictive value of 99.1%, a negative predictive value of 97.7%, and an overall accuracy of 98.3%. The area under the curve (AUC) for the distinction of HSIL vs LSIL was 1.00.

Table 2. Contingency table of the automatic classification by the CNN vs definitive histopathological diagnosis

                            Final histopathological diagnosis
                            HSIL      LSIL
CNN classification   HSIL   4,007     36
                     LSIL   105       4,384

CNN, convolutional neural network; HSIL, high-grade squamous intraepithelial lesion; LSIL, low-grade squamous intraepithelial lesion.


Distinction of HSIL vs LSIL in different subsets of patients

In parallel, we evaluated the performance of the CNN in distinguishing between HSIL and LSIL in different subsets of HRA images, specifically nonstained HRA images (n = 2,820), images after acetic acid staining (n = 13,378), images after Lugol staining (n = 7,234), and images obtained after therapeutic interventions to the anal canal (n = 9,377). For each subset analysis, a dedicated data set was designed, with training and validation data sets constituted using a patient-split design. Figure 3 shows gradient heatmaps highlighting the regions of each frame that contributed most to the output of the CNN.

Figure 3.

Heatmaps demonstrating the areas of the frame that contributed most to the predicted label of the convolutional neural network.
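The article does not name the heatmap technique; Grad-CAM is a common choice for CNN classifiers and is sketched below for a ResNet-style backbone, purely as an assumed illustration.

```python
# Grad-CAM sketch: weight the target layer's activations by the spatially
# averaged gradients of the class score, then upsample to image size.
import torch
import torch.nn.functional as F

def grad_cam(model, image, target_layer, class_idx):
    feats, grads = {}, {}
    h1 = target_layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
    h2 = target_layer.register_full_backward_hook(
        lambda m, gi, go: grads.update(a=go[0]))
    try:
        model.eval()
        logits = model(image.unsqueeze(0))        # image: (C, H, W) tensor
        logits[0, class_idx].backward()
        weights = grads["a"].mean(dim=(2, 3), keepdim=True)
        cam = F.relu((weights * feats["a"]).sum(dim=1, keepdim=True))
        cam = F.interpolate(cam, size=image.shape[1:], mode="bilinear",
                            align_corners=False)
        return ((cam - cam.min()) / (cam.max() - cam.min() + 1e-8))[0, 0]
    finally:
        h1.remove()
        h2.remove()

# e.g., heatmap = grad_cam(model, img_tensor, model.layer4, class_idx=0)
```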

The confusion matrices for the validation data set of each subset are summarized in Table 3, and the corresponding performance measures in Table 4. For nonstained HRA images, the network differentiated HSIL from LSIL with a sensitivity of 90.3%, a specificity of 99.5%, and an accuracy of 93.1% (AUC 0.99, Figure 4a). For images stained with acetic acid, the CNN achieved a sensitivity of 99.0%, a specificity of 87.3%, and an overall accuracy of 92.7% (AUC 0.95, Figure 4b). For Lugol-stained HRA images, these performance measures reached 90.2%, 97.2%, and 94.3%, respectively (AUC 0.99, Figure 4c). Finally, when evaluating postmanipulation images, the algorithm showed a sensitivity of 99.6%, a specificity of 92.6%, and an accuracy of 98.1% (AUC 0.99, Figure 4d) for the detection of residual HSIL.

Table 3. Contingency table for the distinction between HSIL and LSIL

Subset of patients   CNN classification   Final diagnosis(a)
                                          HSIL      LSIL
Nonstained           HSIL                 445       1
                     LSIL                 48        220
Acetic acid          HSIL                 704       104
                     LSIL                 7         715
Lugol                HSIL                 570       25
                     LSIL                 62        862
Postmanipulation     HSIL                 921       19
                     LSIL                 4         239

CNN, convolutional neural network; HSIL, high-grade squamous intraepithelial lesion; LSIL, low-grade squamous intraepithelial lesion.

(a) Number of images (test data set of each secondary analysis).


Table 4. Performance of the algorithm for the detection of HSIL vs LSIL in the different subgroups

                   Sensitivity (%)   Specificity (%)   PPV (%)   NPV (%)   Accuracy (%)
Nonstained         90.3              99.5              99.8      82.1      93.1
Acetic acid        99.0              87.3              87.1      99.0      92.7
Lugol              90.2              97.2              95.8      93.3      94.3
Postmanipulation   99.6              92.6              98.0      98.4      98.1

HSIL, high-grade squamous intraepithelial lesion; LSIL, low-grade squamous intraepithelial lesion; NPV, negative predictive value; PPV, positive predictive value.


Figure 4.

ROC analyses of the network's performance in the detection of HSIL vs LSIL under distinct conditions: (a) nonstained; (b) acetic acid; (c) Lugol; and (d) postmanipulation. An AUC displayed as 1.00 reflects rounding to 2 decimal places; the underlying value exceeds 0.995. AUC, area under the curve; HSIL, high-grade squamous intraepithelial lesion; ROC, receiver operating characteristic.

Image processing performance of the convolutional neural network

At the end of the training stage, the network read 609 batches (77,952 images) in 213 seconds, translating into an overall reading rate of 366 frames per second.
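The arithmetic behind this figure (609 batches of 128 images = 77,952 images; 77,952 / 213 s ≈ 366 frames per second), together with a hedged sketch of how such throughput can be timed, is shown below; the loader and device names are assumptions.

```python
# Hypothetical throughput measurement for a trained classifier.
import time
import torch

@torch.no_grad()
def images_per_second(model, loader, device="cuda"):
    model.eval().to(device)
    n, start = 0, time.perf_counter()
    for batch, _ in loader:
        model(batch.to(device))
        n += batch.size(0)
    if device == "cuda":
        torch.cuda.synchronize()  # flush queued GPU work before stopping the clock
    return n / (time.perf_counter() - start)
```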

DISCUSSION

The development of AI algorithms for clinical practice has been the focus of intense research over the past years. Research activity has been highest in medical fields intrinsically dependent on image interpretation, as is the case of gastrointestinal endoscopy (9). In this field, computer-aided detection and characterization of premalignant and early invasive neoplastic lesions have been identified as priorities for the development of machine learning algorithms (18,19). Thus far, preliminary studies have provided promising results for the detection of both upper GI and colorectal lesions and in the field of pancreatobiliary endoscopy (20–22). Nonetheless, the investigation of these algorithms for the detection and characterization of anorectal pathology remains scarce.

ASCC is a neoplasm that particularly affects vulnerable segments of the population, most significantly people living with HIV and men who have sex with men (1). HRA is the gold standard for the detection of HSIL, the direct precursor of ASCC. According to the International Anal Neoplasia Society (IANS) recommendations, HRA includes staining with acetic acid and Lugol solutions, which allows the identification of areas suspected of harboring HSIL (23). Accurate detection of HSIL is pivotal because HRA allows guided treatment directed to those areas (24). Indeed, the results of the ANCHOR study highlighted the importance of treating HSIL, which lowered the risk of progression to ASCC by almost 60% (5). Nevertheless, some concerns remain regarding the cost-effectiveness of HRA as a screening method because this technique remains costly, has limited availability, and demands extensive training over a long learning curve to acquire expertise (25,26).

The development of deep learning algorithms for HRA may allow a more accurate detection of areas with higher degrees of cellular atypia, overcoming issues related to the limited interobserver agreement on macroscopic findings of HSIL (27). A recent study by Mascarenhas and coworkers demonstrated the feasibility of developing an AI algorithm for HRA (14). The deep learning system achieved high levels of performance, with a sensitivity of 91%, a specificity of 90%, and an overall accuracy of 91% for the detection and differentiation of biopsy-proven HSIL vs LSIL. The algorithm excelled in discriminating between HSIL and LSIL, with an AUC of 0.97. Our study builds on those previous results, aiming to further develop the algorithm and assess its performance under specific circumstances. Indeed, compared with the first exploratory study, we developed a CNN-based algorithm using a patient-split, 5-fold cross-validation design, thus mitigating the risk of both data leakage and overfitting. Using this approach, our algorithm achieved a good discriminating performance, with a sensitivity of 97%, a specificity of 99%, an accuracy of 98%, and an AUC of 0.99. Furthermore, in this study, we evaluated the performance of the network under different conditions, specifically unstained HRA images, different stainings (either acetic acid or Lugol), and after therapeutic interventions to the anal canal. Under these conditions, the CNN achieved overall accuracies between 93% and 98%. The outcomes of these subanalyses highlight the ability of the algorithm to perform adequately both in unstained HRA images and after staining with acetic acid or Lugol. This aspect is of particular importance because an accurate and systematic identification of areas carrying HSIL may enhance the accuracy of HRA-guided biopsies. Notably, the CNN showed an overall accuracy of 98% in distinguishing HSIL from LSIL in HRA images showing the anal canal after therapeutic interventions to treat HSIL or early ASCC. This evidence may have a significant diagnostic and therapeutic impact. Indeed, subsequent development of AI algorithms for HRA may enhance the ability to identify residual or relapsing HSIL, therefore potentially affecting patient management.

This study expands the evidence on applying AI technologies to HRA to identify ASCC precursors and has several strengths. First, this work builds on the expertise of our team in the design of deep learning algorithms for HRA. Of importance, significant methodological improvements were introduced, most notably the application of a patient-split 5-fold cross-validation analysis. This design ensured that images from the same patient were restricted to the same fold and data set, decreasing the risk of overfitting and, thus, increasing the validity of our results. Second, we explored the application of the CNN to HRA images with different stainings and after anorectal interventions for the treatment of HSIL or early ASCC. This allowed us to explore the impact of staining on the performance of the CNN. The algorithm showed high levels of performance across all subgroups, systematically discriminating between HSIL and LSIL with high accuracy. Finally, our deep learning model displayed a high image processing performance, which is paramount for the real-time application of AI solutions. Indeed, while the results of this study highlight the potential of AI to enhance the diagnostic capabilities of HRA, some questions remain to be addressed, including aspects related to data management, storage, and safety, as well as the interoperability between different HRA systems. The continuous development of deep learning systems and the integration of big data may allow the resolution of these challenges.

Nevertheless, this study has some limitations. First, this is a retrospective single-center study evaluating the feasibility and performance of a CNN applied to HRA; as such, it does not assess the potential clinical impact of this technology, and the clinical validity of our results should be externally assessed in large multicenter randomized clinical trials. We anticipate that the main clinical benefit of the real-time application of this algorithm will be an increase in the diagnostic yield of HRA-guided biopsies. Second, this study used images from a single HRA platform, consisting of a high-resolution videoproctoscope instead of a standard colposcope, as per IANS recommendations. Although no evidence exists on the superiority of one HRA method over the other, future studies should be designed to include images from several HRA systems, particularly those endorsed by IANS standards. Third, this study used still HRA images instead of full-length videos; future studies should therefore assess the performance of the CNN in real-time full-length videos. The image processing performance achieved in this study indicates that the network would perform adequately in real-time applications. Finally, the ground truth for the ultimate classification of HRA images as representing HSIL or LSIL was the histopathological diagnosis. Despite being the gold standard for the diagnosis of precursor lesions (i.e., HSIL), there is significant interobserver variability in the pathological diagnosis of anal dysplasia, which must be accounted for.

AI is expected to have a large impact on clinical practice. ASCC is an increasingly prevalent neoplasm affecting vulnerable segments of the population, and its screening and diagnosis are intrinsically dependent on HRA. The performance and clinical impact of HRA may benefit significantly from the development of deep learning algorithms.

CONFLICTS OF INTEREST

Guarantor of the article: Miguel Mascarenhas Saraiva, MD, MSc.

Specific author contributions: M.M.S.: study design, revision of HRA videos, image extraction, construction and development of the CNN, and drafting of the manuscript; critical revision of the manuscript; and bibliographic review. L.S. and N.F.: study design, performance of HRA examinations, revision of HRA videos, image extraction, and critical revision of the manuscript. H.B. and C.M.: study design and critical revision of the manuscript. T.R., J.A., P.C., F.M., and M.M.: bibliographic review, construction and development of the CNN, drafting of the manuscript, and critical revision of the manuscript. M.C. and R.M.: image preparation and processing and construction and development of the CNN. J.P.S.F.: construction and development of the CNN, statistical analysis, and critical revision of the manuscript. J.A., G.M., and V.d.P.: study design and critical revision of the manuscript. All authors approved the final version of the manuscript.

Financial support: None to report.

Potential competing interests: None to report.

Study Highlights

WHAT IS KNOWN

✓ Anal squamous cell carcinoma (ASCC) is a human papillomavirus–related neoplasm, mainly affecting at-risk populations (people living with HIV, men who have sex with men, and immunosuppressed individuals).

✓ High-resolution anoscopy is the gold standard for the detection of ASCC precursors.

✓ Artificial intelligence (AI) has shown increasing importance for the detection of premalignant and malignant lesions.

WHAT IS NEW HERE

✓ An AI algorithm showed good performance for the detection of high-grade squamous intraepithelial lesions.

✓ Our results demonstrate that the AI algorithm retained good performance in different subsettings (different staining protocols and manipulation of the anal canal).

✓ AI can help detect ASCC precursors and predict the presence of relapsing disease after therapeutic intervention to the anal canal.

REFERENCES

1. Smittenaar CR, Petersen KA, Stewart K, et al. Cancer incidence and mortality projections in the UK until 2035. Br J Cancer 2016;115(9):1147–55.
2. Deshmukh AA, Suk R, Shiels MS, et al. Recent trends in squamous cell carcinoma of the anus incidence and mortality in the United States, 2001–2015. J Natl Cancer Inst 2020;112(8):829–38.
3. Maugin F, Lesage AC, Hoyeau N, et al. Early detection of anal high-grade squamous intraepithelial lesion: Do we have an impact on progression to invasive anal carcinoma? J Low Genit Tract Dis 2020;24(1):82–6.
4. Mistrangelo M, Salzano A. Progression of LSIL to HSIL or SCC: Is anoscopy and biopsy good enough? Tech Coloproctol 2019;23(4):303–4.
5. Palefsky JM, Lee JY, Jay N, et al. Treatment of anal high-grade squamous intraepithelial lesions to prevent anal cancer. N Engl J Med 2022;386(24):2273–82.
6. Cappello C, Cuming T, Bowring J, et al. High-resolution anoscopy surveillance after anal squamous cell carcinoma: High-grade squamous intraepithelial lesion detection and treatment may influence local recurrence. Dis Colon Rectum 2020;63(10):1363–71.
7. Chittleborough T, Tapper R, Eglinton T, et al. Anal squamous intraepithelial lesions: An update and proposed management algorithm. Tech Coloproctol 2020;24(2):95–103.
8. Albuquerque A. High-resolution anoscopy: Unchartered territory for gastroenterologists? World J Gastrointest Endosc 2015;7(13):1083–7.
9. Le Berre C, Sandborn WJ, Aridhi S, et al. Application of artificial intelligence to gastroenterology and hepatology. Gastroenterology 2020;158(1):76–94.e2.
10. de-Madaria E, Mira JJ, Carrillo I, et al. The present and future of gastroenterology and hepatology: An international SWOT analysis (the GASTROSWOT project). Lancet Gastroenterol Hepatol 2022;7(5):485–94.
11. Catlow J, Bray B, Morris E, et al. Power of big data to improve patient care in gastroenterology. Frontline Gastroenterol 2022;13(3):237–44.
12. Repici A, Badalamenti M, Maselli R, et al. Efficacy of real-time computer-aided detection of colorectal neoplasia in a randomized trial. Gastroenterology 2020;159(2):512–20.e7.
13. Ferreira JPS, de Mascarenhas Saraiva M, Afonso JPL, et al. Identification of ulcers and erosions by the novel Pillcam Crohn's capsule using a convolutional neural network: A multicentre pilot study. J Crohns Colitis 2022;16(1):169–72.
14. Saraiva MM, Spindler L, Fathallah N, et al. Artificial intelligence and high-resolution anoscopy: Automatic identification of anal squamous cell carcinoma precursors using a convolutional neural network. Tech Coloproctol 2022;26(11):893–900.
15. Burgart LJ, Chopp WV, Jain D. Protocol for the Examination of Excision Specimens From Patients With Carcinoma of the Anus. Version 4.2.0. Washington, DC: College of American Pathologists, 2021.
16. Bradski G. The OpenCV library. Dr Dobbs J Softw Tools 2000;120:122–5.
17. Pedregosa F, Varoquaux G, Gramfort A, et al. Scikit-learn: Machine learning in Python. J Mach Learn Res 2011;12:2825–30.
18. Messmann H, Bisschops R, Antonelli G, et al. Expected value of artificial intelligence in gastrointestinal endoscopy: European Society of Gastrointestinal Endoscopy (ESGE) position statement. Endoscopy 2022;54(12):1211–31.
19. Berzin TM, Parasa S, Wallace MB, et al. Position statement on priorities for artificial intelligence in GI endoscopy: A report by the ASGE Task Force. Gastrointest Endosc 2020;92(4):951–9.
20. Wallace MB, Sharma P, Bhandari P, et al. Impact of artificial intelligence on miss rate of colorectal neoplasia. Gastroenterology 2022;163(1):295–304.e5.
21. Wu L, Xu M, Jiang X, et al. Real-time artificial intelligence for detecting focal lesions and diagnosing neoplasms of the stomach by white-light endoscopy (with videos). Gastrointest Endosc 2022;95(2):269–80.e6.
22. Kuwahara T, Hara K, Mizuno N, et al. Artificial intelligence using deep learning analysis of endoscopic ultrasonography images for the differential diagnosis of pancreatic masses. Endoscopy 2023;55(2):140–9.
23. Hillman RJ, Cuming T, Darragh T, et al. 2016 IANS international guidelines for practice standards in the detection of anal cancer precursors. J Low Genit Tract Dis 2016;20(4):283–91.
24. Albuquerque A, Sheaff M, Stirrup O, et al. Performance of anal cytology compared with high-resolution anoscopy and histology in women with lower anogenital tract neoplasia. Clin Infect Dis 2018;67(8):1262–8.
25. Siegenbeek van Heukelom ML, Marra E, Cairo I, et al. Detection rate of high-grade squamous intraepithelial lesions as a quality assurance metric for high-resolution anoscopy in HIV-positive men. Dis Colon Rectum 2018;61(7):780–6.
26. Clarke MA, Wentzensen N. Strategies for screening and early detection of anal cancers: A narrative and systematic review and meta-analysis of cytology, HPV testing, and other biomarkers. Cancer Cytopathol 2018;126(7):447–60.
27. Richel O, Hallensleben ND, Kreuter A, et al. High-resolution anoscopy: Clinical features of anal intraepithelial neoplasia in HIV-positive men. Dis Colon Rectum 2013;56(11):1237–42.
