A Computational Framework for Intraoperative Pupil Analysis in Cataract Surgery

Abstract

Purpose: Pupillary instability is a known risk factor for complications in cataract surgery. This study aimed to develop and validate a reliable computational framework for the automated assessment of pupil morphologic changes across the phases of cataract surgery.

Design: Retrospective surgical video analysis.

Subjects: Two hundred forty complete surgical video recordings: 190 surgeries performed without a pupil expansion device and 50 performed with one.

Methods: The proposed framework consists of three stages: feature extraction, deep learning (DL)-based anatomy recognition, and obstruction detection/compensation. In the first stage, surgical video frames undergo noise reduction using a tensor-based wavelet feature extraction method (AWTFE). In the second stage, DL-based segmentation models are trained and employed to segment the pupil, limbus, and palpebral fissure. In the third stage, obstructed visualization of the pupil is detected and compensated for using a DL-based algorithm. A dataset of 5,700 intraoperative video frames across 190 cataract surgeries in the BigCat database was collected to validate algorithm performance.

Main Outcome Measures: The pupil analysis framework was assessed on segmentation performance for both obstructed and unobstructed pupils. Classification performance of models using the segmented pupil time series to predict surgeon use of a pupil expansion device was also assessed.

Results: An architecture based on the FPN model with a VGG16 backbone, integrated with the AWTFE feature extraction method, demonstrated the highest anatomy segmentation performance, with a Dice coefficient of 96.52%. Incorporating the obstruction compensation algorithm improved performance further (Dice coefficient 96.82%). Downstream analysis of framework output enabled the development of an SVM-based classifier that predicted surgeon use of a pupil expansion device prior to its placement with 96.67% accuracy and an AUC of 99.44%.

Conclusions: The experimental results demonstrate that the proposed framework 1) provides high accuracy in pupil analysis compared with human-annotated ground truth, 2) substantially outperforms isolated use of a DL segmentation model, and 3) can enable downstream analytics with clinically valuable predictive capacity.
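For readers unfamiliar with the main outcome measure, the Dice coefficient reported above quantifies overlap between a predicted segmentation mask and the human-annotated ground truth. The sketch below shows how it is conventionally computed from two binary masks; the toy 4x4 masks are hypothetical and are not the study's evaluation code.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity between two binary segmentation masks:
    2 * |pred AND truth| / (|pred| + |truth|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treated as perfect agreement
    return 2.0 * intersection / total

# Hypothetical toy masks standing in for a predicted pupil region (pred)
# and an annotator's ground-truth region (truth).
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0]])
truth = np.array([[0, 1, 1, 0],
                  [0, 1, 1, 0],
                  [0, 0, 1, 0],
                  [0, 0, 0, 0]])
print(round(dice_coefficient(pred, truth), 4))  # → 0.9091
```

A Dice coefficient of 96.52%, as reported for the FPN/VGG16 model, thus indicates near-complete overlap between the automated and human-annotated pupil boundaries.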

Competing Interest Statement

The authors have declared no competing interest.

Funding Statement

GME Innovations Fund (NN, BT); The Doctors Company Foundation (NN, BT); NIH K12EY022299 (NN); Fogarty/NIH D43TW012027 (NN, KS).

Author Declarations

I confirm all relevant ethical guidelines have been followed, and any necessary IRB and/or ethics committee approvals have been obtained.

Yes

The details of the IRB/oversight body that provided approval or exemption for the research described are given below:

The study was approved (HUM00160950) by the Michigan Medicine IRB (IRBMED) in May of 2019.

I confirm that all necessary patient/participant consent has been obtained and the appropriate institutional forms have been archived, and that any patient/participant/sample identifiers included were not known to anyone (e.g., hospital staff, patients or participants themselves) outside the research group so cannot be used to identify individuals.

Yes

I understand that all clinical trials and any other prospective interventional studies must be registered with an ICMJE-approved registry, such as ClinicalTrials.gov. I confirm that any such study reported in the manuscript has been registered and the trial registration ID is provided (note: if posting a prospective study registered retrospectively, please provide a statement in the trial ID field explaining why the study was not registered in advance).

Yes

I have followed all appropriate research reporting guidelines, such as any relevant EQUATOR Network research reporting checklist(s) and other pertinent material, if applicable.

Yes

Data Availability

Data produced in the present study are not publicly available.
