Hierarchical segmentation of surgical scenes in laparoscopy

Purpose:

Segmentation of surgical scenes can provide valuable information for real-time guidance and post-operative analysis. However, some surgical video frames contain unavoidable ambiguity, leading to incorrect class predictions or missed detections. In this work, we propose a novel method that alleviates this problem by introducing a class hierarchy and an associated hierarchical inference scheme, allowing broad anatomical structures to be predicted when fine-grained structures cannot be reliably distinguished.

Methods:

First, we formulate a multi-label segmentation loss informed by a hierarchy of anatomical classes and train a network with it. We then use a novel leaf-to-root inference scheme (“Hiera-Mix”) to determine the trade-off between label confidence and granularity. This method can be applied to any segmentation model. We evaluate our approach on a large laparoscopic cholecystectomy dataset comprising 65,000 labelled frames.
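To illustrate the general idea only: the abstract does not specify the exact loss formulation or the Hiera-Mix selection rule, so the hierarchy, class names, thresholding criterion, and all function names below are assumptions. A minimal PyTorch-style sketch might build multi-label targets in which each labelled class also activates its ancestors, and at inference climb from the most confident leaf towards the root until a confidence threshold is met:

import torch
import torch.nn.functional as F

# Hypothetical hierarchy (illustrative only, not the paper's label set):
# each class maps to its parent; root classes map to None.
PARENT = {
    "cystic_artery": "undissected_area",
    "cystic_duct": "undissected_area",
    "undissected_area": None,
    "gallbladder": None,
}
CLASSES = list(PARENT)
IDX = {c: i for i, c in enumerate(CLASSES)}


def ancestors(cls):
    """Yield cls and all of its ancestors up to the root."""
    while cls is not None:
        yield cls
        cls = PARENT[cls]


def hierarchical_targets(leaf_labels):
    """Convert per-pixel class indices (H, W) into multi-label targets
    (C, H, W) where every ancestor of the labelled class is also set to 1."""
    h, w = leaf_labels.shape
    target = torch.zeros(len(CLASSES), h, w)
    for name, idx in IDX.items():
        mask = leaf_labels == idx
        for anc in ancestors(name):
            target[IDX[anc]][mask] = 1.0
    return target


def hierarchical_multilabel_loss(logits, leaf_labels):
    """Binary cross-entropy over all hierarchy levels; logits is (C, H, W)."""
    return F.binary_cross_entropy_with_logits(
        logits, hierarchical_targets(leaf_labels)
    )


def leaf_to_root_inference(logits, threshold=0.5):
    """Assumed flavour of leaf-to-root inference: per pixel, take the most
    confident leaf and climb towards the root until the sigmoid confidence
    clears the threshold, trading granularity for confidence."""
    probs = torch.sigmoid(logits)                     # (C, H, W)
    leaves = [c for c in CLASSES if c not in PARENT.values()]
    leaf_idx = torch.tensor([IDX[c] for c in leaves])
    best_leaf = probs[leaf_idx].argmax(dim=0)         # (H, W) index into leaves
    out = torch.full(best_leaf.shape, -1, dtype=torch.long)
    for i, leaf in enumerate(leaves):
        mask = best_leaf == i
        assigned = torch.zeros_like(mask)
        for cls in ancestors(leaf):                   # leaf first, root last
            ok = mask & ~assigned & (probs[IDX[cls]] > threshold)
            out[ok] = IDX[cls]
            assigned |= ok
    return out  # -1 where no class along the chain is confident enough

The paper's actual hierarchy, loss weighting, and the way Hiera-Mix balances confidence against granularity may differ; the sketch only conveys the structure of a hierarchy-aware multi-label loss and a coarse-to-fine fallback at inference.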

Results:

We observed an increase in per-structure detection F1 score for the critical structures, when evaluated across their sub-hierarchies, compared to the baseline method: 6.0% for the cystic artery and 2.9% for the cystic duct, driven primarily by increases in precision of 11.3% and 4.7%, respectively. This corresponded to visibly improved segmentation outputs, with better characterisation of the undissected area containing the critical structures and fewer inter-class confusions. For other anatomical classes, which did not stand to benefit from the hierarchy, performance was unimpaired.

Conclusion:

Our proposed hierarchical approach improves surgical scene segmentation in frames with ambiguity by more suitably reflecting the model’s parsing of the scene. This may benefit applications of surgical scene segmentation, including recent advances towards computer-assisted intra-operative guidance.
