Multimodal Fusion Model for Classifying Placental Ultrasound Images in Pregnancies with Hypertensive Disorders

In the perinatal period, danger to the mother and fetus is closely related to complications of pregnancy[1], [2], [3]. These complications, including gestational diabetes mellitus (GDM), hypertensive disorders of pregnancy (HDP), and foetal growth restriction (FGR), can lead, in the most severe cases, to premature delivery, foetal death, and even long-term complications for the mother[4].

When placental function is adequate to support normal foetal growth and development, there is a higher chance of delivering a healthy, mature baby. It has only recently been recognized that obstetric complications affecting foetal outcomes can be placenta-induced, and that placental changes can precede clinical manifestations and may be causally related to clinical symptoms[5], [6]. As a result, the placenta has become a new focus of prenatal screening. Particularly with regard to pregnancy complications and even foetal death, studying the placenta can provide more information than studying the fetus alone[7], [8]. Ultrasound (US) is recognized as an effective technique for examining placental disease[9]. Greyscale ultrasound can intuitively detect obvious structural abnormalities of the placenta, while microflow imaging can sensitively capture the distribution of microvessels within the tissue. Sebire[10] noted that prenatal ultrasound can identify a range of features associated with placental pathology and argued that prenatal features should be correlated with specific pathology as far as possible[10]. In FGR, owing to impaired function, the placenta exhibits a range of macroscopic to microscopic abnormalities[11], [12]. However, little is known about changes in placental features in HDPs. The better we recognize placental features, the better we can understand the occurrence and development of obstetric complications.

However, the naked eye's ability to distinguish these features is limited. Artificial intelligence (AI) has gradually become widely used in medicine, and continuing progress in deep learning, image recognition, and related technologies will further mature the intelligent medical industry chain. Intelligent ultrasound diagnosis can yield more accurate diagnostic suggestions and personalized treatment plans. Deep learning, accelerated by large numbers of graphics processing units, can recognize abstract and complex graphical features[13], providing a way to extract features from tissue images that are not easily visible to the naked eye. QuantusFLM®, a non-invasive software tool based on deep learning techniques, was reported to predict the risk of neonatal respiratory morbidity (NRM) from delineated foetal lung ultrasound images among late preterm deliveries[14]. A fine-tuned GoogLeNet model for classifying thyroid nodules in ultrasound images also achieved excellent performance[15]. These results show that deep learning has been widely and effectively applied in medical image analysis.

In addition, numerous studies have shown that image features such as texture and structural features extracted from greyscale images (GSIs) can reflect the maturity and function of the placenta. Chen et al. (2010)[16] proposed a radiological method to evaluate the correlation between GSI texture features and gestational age and placental maturity. Li et al. (2015)[17] used visual features to automatically stage placental maturity via a multilayer Fisher vector. In recent years, microflow imaging has gradually entered gynaecological ultrasound. Microflow images (MFIs) show the distribution and intensity of blood flow, an important indicator of placental maturity and function. However, there is currently no research on the automatic analysis of placental MFIs.
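As an illustrative sketch of what "texture features of GSIs" can mean in practice, the following computes a grey-level co-occurrence matrix (GLCM) and its contrast statistic on a toy image. This is a generic textbook feature, not the specific feature set used in the cited studies; the function names and the toy data are hypothetical.

```python
import numpy as np

def glcm(image, levels, dx=1, dy=0):
    """Grey-level co-occurrence matrix for a single pixel offset (dx, dy)."""
    m = np.zeros((levels, levels), dtype=np.float64)
    h, w = image.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[image[y, x], image[y + dy, x + dx]] += 1
    return m / m.sum()  # normalise counts to joint probabilities

def contrast(p):
    """GLCM contrast: sum over (i, j) of (i - j)^2 * p(i, j)."""
    i, j = np.indices(p.shape)
    return float(((i - j) ** 2 * p).sum())

# Toy 4-level "images": a uniform patch has zero contrast,
# a random patch has positive contrast.
flat = np.zeros((8, 8), dtype=int)
noisy = np.random.default_rng(0).integers(0, 4, size=(8, 8))
print(contrast(glcm(flat, 4)))   # 0.0
print(contrast(glcm(noisy, 4)))  # positive
```

Libraries such as scikit-image provide optimized equivalents; the point here is only that such statistics quantify local intensity patterns that the eye cannot reliably grade.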

Therefore, the purpose of this study was to identify differences in placental features between HDP and normal pregnancies and to introduce a deep learning model, named GMNet, based on multimodal feature fusion of GSIs and MFIs, for evaluating and classifying placental features in pregnancies complicated by HDPs. We hypothesized that the model could assist traditional visual diagnosis in evaluating obstetric complications, which are grounded in pathological changes (villus density, number of blood vessels, syncytial nodules, and cellulose in the placenta). After multimodal image training, we expected the model to make this distinction reliably. GMNet would then be a feasible tool for identifying placental tissue abnormalities in ultrasound images. It might not only provide reference value for clinical diagnosis and treatment but also broaden the application of artificial intelligence in medical imaging.
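The multimodal fusion idea can be sketched minimally as follows: each modality (GSI and MFI) is passed through its own encoder, the resulting feature vectors are concatenated, and a classifier operates on the fused representation. This is a generic late-fusion sketch in NumPy, not GMNet itself; all layer sizes, weights, and function names here are hypothetical stand-ins for the trained network.

```python
import numpy as np

rng = np.random.default_rng(42)

def branch(x, w, b):
    """One modality branch: a linear layer with ReLU, standing in for a CNN encoder."""
    return np.maximum(0.0, x @ w + b)

# Hypothetical dimensions: 64-d input per modality, encoded to 32-d each.
w_gsi, b_gsi = rng.normal(size=(64, 32)), np.zeros(32)
w_mfi, b_mfi = rng.normal(size=(64, 32)), np.zeros(32)
w_cls, b_cls = rng.normal(size=(64, 2)), np.zeros(2)   # 2 classes: HDP vs. normal

def fuse_and_classify(gsi_feat, mfi_feat):
    """Late fusion: encode each modality, concatenate, then classify with softmax."""
    fused = np.concatenate([branch(gsi_feat, w_gsi, b_gsi),
                            branch(mfi_feat, w_mfi, b_mfi)], axis=-1)
    logits = fused @ w_cls + b_cls
    e = np.exp(logits - logits.max())   # numerically stable softmax
    return e / e.sum()

probs = fuse_and_classify(rng.normal(size=64), rng.normal(size=64))
print(probs.shape)  # (2,)
```

Concatenation before the classifier is the simplest fusion scheme; in practice the encoders would be convolutional networks trained end-to-end on paired GSI/MFI inputs.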
